Large distributed systems benefit from the ability to exchange jobs between nodes to share the overall workload. To exchange jobs, nodes rely on probe messages that are generated either by lightly loaded or by highly loaded nodes, corresponding to a so-called pull or push strategy, respectively. A key quantity of any pull or push strategy, one that has often been neglected in prior studies, is the resulting overall probe rate. If one strategy outperforms another in terms of mean delay, but at the same time requires a higher overall probe rate, it is unclear whether it is truly more powerful. In this paper we introduce a new class of rate-based pull and push strategies that can match any predefined maximum allowed probe rate, which allows one to compare pull and push strategies in a fair manner. We derive a closed-form expression for the mean delay of this new class of strategies in a homogeneous network with Poisson arrivals and exponential job durations under the infinite system model. We further show that the infinite system model is the proper limit process over any finite time scale as the number of nodes in the system tends to infinity, and that the convergence extends to the stationary regime. Simulation experiments confirm that the infinite system model becomes more accurate as the number of nodes grows; the observed error is already only around 1% for systems with as few as 100 nodes.
{"title":"Improved Rate-Based Pull and Push Strategies in Large Distributed Networks","authors":"Wouter Minnebo, B. V. Houdt","doi":"10.1109/MASCOTS.2013.22","DOIUrl":"https://doi.org/10.1109/MASCOTS.2013.22","url":null,"abstract":"Large distributed systems benefit from the ability to exchange jobs between nodes to share the overall workload. To exchange jobs, nodes rely on probe messages that are either generated by lightly-loaded or highly-loaded nodes, which corresponds to a so-called pull or push strategy. A key quantity of any pull or push strategy, that has often been neglected in prior studies, is the resulting overall probe rate. If one strategy outperforms another strategy in terms of the mean delay, but at the same time requires a higher overall probe rate, it is unclear whether it is truly more powerful. In this paper we introduce a new class of rate-based pull and push strategies that can match any predefined maximum allowed probe rate, which allows one to compare the pull and push strategy in a fair manner. We derive a closed form expression for the mean delay of this new class of strategies in a homogeneous network with Poisson arrivals and exponential job durations under the infinite system model. We further show that the infinite system model is the proper limit process over any finite time scale as the number of nodes in the system tends to infinity and that the convergence extends to the stationary regime. Simulation experiments confirm that the infinite system model becomes more accurate as the number of nodes tends to infinity, while the observed error is already around 1% for systems with as few as 100 nodes.","PeriodicalId":385538,"journal":{"name":"2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134413751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce a new algorithm for distributed leader election in a broadcast channel that is more efficient than the classic Part-and-Try algorithm. The algorithm has the advantage of a reduced overhead of O(log log N) rather than O(log N). More importantly, it has a greatly reduced energy consumption, since it requires O(N^(1/k)) burst transmissions per election instead of O(N/k), where k is a parameter that depends on the physical properties of the communication medium. The algorithm has interesting potential applications in cognitive wireless networking.
{"title":"A Novel Energy Efficient Broadcast Leader Election","authors":"P. Jacquet, Dimitris Milioris, P. Mühlethaler","doi":"10.1109/MASCOTS.2013.71","DOIUrl":"https://doi.org/10.1109/MASCOTS.2013.71","url":null,"abstract":"We introduce a new algorithm to achieve a distributed leader election in a broadcast channel that is more efficient than the classic Part-and-Try algorithm. The algorithm has the advantage of having a reduced overhead log logN rather than log N. More importantly, the algorithm has a greatly reduced energy consumption since it requires O(N1=k) burst transmissions instead of O(N=k), per election, k being a parameter depending on the physical properties of the medium of communication. The algorithm has interesting potential applications in cognitive wireless networking.","PeriodicalId":385538,"journal":{"name":"2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131734855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic Voltage and Frequency Scaling (DVFS) is an essential part of controlling the power consumption of any computer system, from mobile phones to servers. DVFS efficiency relies on hardware-software co-optimization; experiments on existing hardware therefore cannot reveal optimization potential beyond the characteristics of the current implementation. To explore the vast design space for DVFS efficiency, which straddles software and hardware, a simulation infrastructure must provide features that are not readily available today, for example software-controllable clock and voltage domains, support for the OS and its frequency-scaling module, and an online power-estimation methodology. As its main contribution, this work enables DVFS studies in a full-system simulator. We extend the gem5 simulator to support full-system DVFS modeling, thereby enabling energy-efficiency experiments in gem5, and we showcase such studies. Finally, we show that both existing and novel frequency governors for Linux and Android can be effortlessly integrated into the framework, and we evaluate the efficiency of different DVFS schemes.
{"title":"Introducing DVFS-Management in a Full-System Simulator","authors":"Vasileios Spiliopoulos, Akash Bagdia, Andreas Hansson, P. Aldworth, S. Kaxiras","doi":"10.1109/MASCOTS.2013.75","DOIUrl":"https://doi.org/10.1109/MASCOTS.2013.75","url":null,"abstract":"Dynamic Voltage and Frequency Scaling (DVFS) is an essential part of controlling the power consumption of any computer system, ranging from mobile phones to servers. DVFS efficiency relies on hardware-software co-optimization, thus using existing hardware cannot reveal the full optimization potential beyond the current implementation's characteristics. To explore the vast design space for DVFS efficiency, that straddles software and hardware, a simulation infrastructure must provide features that are not readily available today, for example: software controllable clock and voltage domains, support for the OS and the frequency scaling module of it, and an online power estimation methodology. As the main contribution, this work enables DVFS studies in a full-system simulator. We extend the gem5 simulator to support full-system DVFS modeling. By doing so, we enable energy-efficiency experiments to be performed in gem5 and we showcase such studies. Finally, we show that both existing and novel frequency governors for Linux and Android can be effortlessly integrated in the framework, and we evaluate the efficiency of different DVFS schemes.","PeriodicalId":385538,"journal":{"name":"2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129226587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software-Defined Networking (SDN) approaches were introduced as early as the mid-1990s, but only recently became a well-established industry standard. Many network architectures and systems have adopted SDN, and vendors are choosing SDN as an alternative to the fixed, predefined, and inflexible protocol stack. SDN offers flexible, dynamic, and programmable functionality of network systems, as well as many other advantages such as centralized control, reduced complexity, better user experience, and a dramatic decrease in the cost of network systems and equipment. However, the characteristics and capabilities of the SDN implementation, as well as the workload of the network traffic that SDN-based systems handle, determine the extent of these advantages. Moreover, the flexibility enabled by SDN-based systems comes with a performance penalty: the design and capabilities of the underlying SDN infrastructure influence the performance of common network tasks compared to a dedicated solution. In this paper we analyze two issues: a) the impact of SDN on raw performance (in terms of throughput and latency) under various workloads, and b) whether there is an inherent performance penalty for a complex, more functional SDN infrastructure. Our results indicate that SDN does have a performance penalty; however, it is not necessarily related to the complexity level of the underlying SDN infrastructure.
{"title":"Performance Analysis of Software-Defined Networking (SDN)","authors":"Alexander Gelberger, Niv Yemini, R. Giladi","doi":"10.1109/MASCOTS.2013.58","DOIUrl":"https://doi.org/10.1109/MASCOTS.2013.58","url":null,"abstract":"Software-Defined Networking (SDN) approaches were introduced as early as the mid-1990s, but just recently became a well-established industry standard. Many network architectures and systems adopted SDN, and vendors are choosing SDN as an alternative to the fixed, predefined, and inflexible protocol stack. SDN offers flexible, dynamic, and programmable functionality of network systems, as well as many other advantages such as centralized control, reduced complexity, better user experience, and a dramatic decrease in network systems and equipment costs. However, SDN characterization and capabilities, as well as workload of the network traffic that the SDN-based systems handle, determine the level of these advantages. Moreover, the enabled flexibility of SDN-based systems comes with a performance penalty. The design and capabilities of the underlying SDN infrastructure influence the performance of common network tasks, compared to a dedicated solution. In this paper we analyze two issues: a) the impact of SDN on raw performance (in terms of throughput and latency) under various workloads, and b) whether there is an inherent performance penalty for a complex, more functional, SDN infrastructure. Our results indicate that SDN does have a performance penalty, however, it is not necessarily related to the complexity level of the underlying SDN infrastructure.","PeriodicalId":385538,"journal":{"name":"2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems","volume":"284 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131441533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Current trace replay methods for file system evaluation fail to represent traced workloads accurately. When using a misrepresented workload, one may draw wrong conclusions from the system evaluation. For example, a system designer can miss performance problems if the replay of a trace produces an underloaded representation of the real workload. Even worse, one can make wrong design decisions, leading to optimization for atypical workloads. In this study, we captured and replayed traces from standard file systems using methods proposed in the literature, to demonstrate the inaccuracy of state-of-the-art trace replay methods. We also exposed a shortcoming of current methodologies: in the replay of a general-purpose workload trace, we observed a difference of up to 100% in request response time caused by the choice of trace replay method.
{"title":"On the Accuracy of Trace Replay Methods for File System Evaluation","authors":"T. Pereira, Lívia M. R. Sampaio, F. Brasileiro","doi":"10.1109/MASCOTS.2013.56","DOIUrl":"https://doi.org/10.1109/MASCOTS.2013.56","url":null,"abstract":"Current trace replay methods for file system evaluation fail to represent traced workloads accurately. When using a misrepresented workload one may take wrong conclusions about the system evaluation. For example, a system designer can miss performance problems if the replay of a trace produces an under loaded representation of the real workload. Even worse, one can take wrong design decisions, leading to optimization of untypical workloads. In this study, we captured and replayed traces from standard file systems using methods proposed in the literature, to exemplify the inaccuracy of state-of-art trace replay methods. We also exposed a shortcoming of current methodologies, in a replay of a general purpose workload trace, we observed a difference of up to 100% on request response time, caused by the choice of trace replay method.","PeriodicalId":385538,"journal":{"name":"2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115032591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents new results on the characterization and modeling of user-generated video popularity evolution, based on a recent complementary data collection for videos that were previously the subject of an eight-month data collection campaign during 2008/09. In particular, during 2011, we collected two contiguous months of weekly view counts for videos in two separate 2008/09 datasets, namely the "recently-uploaded" and the "keyword-search" datasets. These datasets contain statistics for videos that were uploaded within 7 days of the start of data collection in 2008 and videos that were discovered using a keyword-search algorithm in 2008, respectively. Our analysis shows that the average weekly view count for the recently-uploaded videos had not decreased by the time of the second measurement period, in comparison to the middle and later portions of the first measurement period. The new data is used to evaluate the accuracy of a previously proposed model for synthetic view count generation over time periods substantially longer than previously considered. We find that the model yielded distributions of total (lifetime) video view counts that match the empirical distributions; however, significant differences between the model and the empirical data were observed with respect to other metrics. These differences appear to arise from popularity characteristics that change over time, rather than being week-invariant as assumed in the model.
{"title":"Revisiting Popularity Characterization and Modeling of User-Generated Videos","authors":"M. A. Islam, D. Eager, Niklas Carlsson, A. Mahanti","doi":"10.1109/MASCOTS.2013.50","DOIUrl":"https://doi.org/10.1109/MASCOTS.2013.50","url":null,"abstract":"This paper presents new results on characterization and modeling of user-generated video popularity evolution, based on a recent complementary data collection for videos that were previously the subject of an eight month data collection campaign during 2008/09. In particular, during 2011, we collected two contiguous months of weekly view counts for videos in two separate 2008/09 datasets, namely the ``recently-uploaded'' and the ``keyword-search'' datasets. These datasets contain statistics for videos that were uploaded within 7 days of the start of data collection in 2008 and videos that were discovered using a keyword search algorithm in 2008, respectively. Our analysis shows that the average weekly view count for the recently-uploaded videos had not decreased by the time of the second measurement period, in comparison to the middle and later portions of the first measurement period. The new data is used to evaluate the accuracy of a previously proposed model for synthetic view count generation for time periods that are substantially longer than previously considered. We find that the model yielded distributions of total (lifetime) video view counts that match the empirical distributions, however, significant differences between the model and empirical data were observed with respect to other metrics. These differences appear to arise because of particular popularity characteristics that change over time rather than being week-invariant as assumed in the model.","PeriodicalId":385538,"journal":{"name":"2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128237529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The popularity of cloud-based interactive computing services (e.g., virtual desktops) brings new management challenges. Each interactive user leaves abundant but fluctuating residual resources while being intolerant of latency, precluding the use of aggressive VM consolidation. In this paper, we present the Resource Harvester for Interactive Clouds (RHIC), an autonomous management framework that harnesses dynamic residual resources aggressively without slowing the harvested interactive services. RHIC builds ad-hoc clusters for running throughput-oriented "background" workloads using a hybrid of residual and dedicated resources. These hybrid clusters offer significant gains over normal dedicated clusters: 20-40% cost and 20-29% energy savings in our testbed. For a given background job, RHIC intelligently discovers and maintains the ideal cluster size and composition to meet user-specified goals such as cost/energy minimization or deadlines. RHIC employs black-box workload performance modeling, requiring only system-level metrics, and incorporates techniques to improve modeling accuracy with bursty and heterogeneous residual resources. We demonstrate the effectiveness and adaptivity of our RHIC prototype with two parallel data analytics frameworks, Hadoop and HBase. Our results show that RHIC finds near-ideal cluster sizes and compositions across a wide range of workload/goal combinations.
{"title":"Accelerating Batch Analytics with Residual Resources from Interactive Clouds","authors":"R. Clay, Zhiming Shen, Xiaosong Ma","doi":"10.1109/MASCOTS.2013.63","DOIUrl":"https://doi.org/10.1109/MASCOTS.2013.63","url":null,"abstract":"The popularity of cloud-based interactive computing services (e.g., virtual desktops) brings new management challenges. Each interactive user leaves abundant but fluctuating residual resources while being intolerant to latency, precluding the use of aggressive VM consolidation. In this paper, we present the Resource Harvester for Interactive Clouds (RHIC), an autonomous management framework that harnesses dynamic residual resources aggressively without slowing the harvested interactive services. RHIC builds ad-hoc clusters for running throughput-oriented \"background\" workloads using a hybrid of residual and dedicated resources. These hybrid clusters offer significant gains over normal dedicated clusters: 20-40% cost and 20-29% energy savings in our test bed. For a given background job, RHIC intelligently discovers and maintains the ideal cluster size and composition, to meet user-specified goals such as cost/energy minimization or deadlines. RHIC employs black-box workload performance modeling, requiring only system-level metrics and incorporating techniques to improve modeling accuracy with bursty and heterogeneous residual resources. We demonstrate the effectiveness and adaptivity of our RHIC prototype with two parallel data analytics frameworks, Hadoop and HBase. Our results show that RHIC finds near-ideal cluster sizes and compositions across a wide range of workload/goal combinations.","PeriodicalId":385538,"journal":{"name":"2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129600968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Over the last decade, web content has evolved from relatively static pages, often delivered by one or two servers, to websites rich with interactive media content served from numerous servers. This change in content has affected the associated network traffic. Quantifying and analyzing these changes can lead to updated traffic models and more accurate web traffic simulations for testing new protocols and devices. In this work we analyze the TCP/IP headers in packet traces collected at various times over 13 years on the link that connects the University of North Carolina at Chapel Hill (UNC) to its ISP. We show that while the decade-old methodology for inferring web activity from these packet traces is still viable, it is no longer possible to infer all page boundaries given only the TCP and IP headers. We propose a novel method for segmenting web traffic into Activity Sections in order to obtain comparable higher-level statistics. Using these methods to analyze our dataset, we describe trends in HTTP request and response sizes and a trend towards longer connection durations. We also show that the number of servers supporting web activity has increased, and we present empirical evidence suggesting that the number of unused connections has risen, likely due to new speculative TCP preconnect features of popular browsers.
{"title":"The Continued Evolution of Web Traffic","authors":"Ben Newton, K. Jeffay, Jay Aikat","doi":"10.1109/MASCOTS.2013.16","DOIUrl":"https://doi.org/10.1109/MASCOTS.2013.16","url":null,"abstract":"Over the last decade web content has evolved from relatively static pages often delivered by one or two servers, to websites rich with interactive media content served from numerous servers. This content change has affected the associated network traffic. Quantifying and analyzing these changes can lead to updated traffic models and more accurate web traffic simulations for testing new protocols and devices. In this work we analyze the TCP/IP headers in packet traces collected at various times over 13 years on the link that connects the University of North Carolina at Chapel Hill (UNC) to its ISP. We show that while the decade-old methodology for inferring web activity from these packet traces is still viable, it is no longer possible to infer all page boundaries given only the TCP and IP headers. We propose a novel method for segmenting web traffic into Activity Sections, in order to obtain comparable higher level statistics. Using these methods to analyze our data set, we describe trends in the HTTP request and response sizes, and a trend towards longer connection durations. We also show that the number of servers supporting web activity has increased, and present empirical evidence that suggests the number of unused connections has risen, likely due to new speculative TCP preconnect features of popular browsers.","PeriodicalId":385538,"journal":{"name":"2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130550422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wireless mesh networks (WMNs) were proposed to provide low-cost, easily deployed, and robust access to the Internet. One objective in WMNs is improving client throughput, which can be achieved through content replication. Many replication schemes are designed specifically for the Internet; however, they do not account for the distinct characteristics of wireless networks, such as insufficient and fluctuating bandwidth, packet loss, and contention for the wireless medium. In this paper, we study the problem of object replication and placement in WMNs, where mesh nodes act as replica servers in a P2P model that improves Quality of Experience by replicating content as close as possible to the requesting mesh clients. Furthermore, we aim to optimize the number of replicas per object to better utilize the storage capacity of each node. In WMNs, wireless link quality is paramount both in the placement decision and in measuring the object access cost. We therefore propose a link-quality-aware, distributed, and scalable scheme for object replication. The proposed scheme exploits long-term link-quality routing metrics to guide the replica placement decision and instantaneous link-quality metrics for replica server selection. Simulation results show that our proposed scheme outperforms other replication schemes.
{"title":"Link-Quality Aware Object Replication and Placement for Multi-hop Wireless Mesh Networks","authors":"Zakwan Al-Arnaout, Jonathan Hart, Q. Fu, Marcus Frean","doi":"10.1109/MASCOTS.2013.68","DOIUrl":"https://doi.org/10.1109/MASCOTS.2013.68","url":null,"abstract":"Wireless mesh networks (WMNs) were proposed to provide low-cost, easy deployment and robust access to the Internet. One of the objectives in WMNs is the improvement of client throughput, which can be achieved using content replication. Many replication schemes are specifically designed for the Internet. However, they do not account for the different characteristics of wireless networks, such as insufficient and fluctuating bandwidth, packet loss, contention to access the wireless medium, etc. In this paper, we study the problem of object replication and placement in WMNs, where mesh nodes act as replica servers in a P2P model that improves Quality of Experience by replicating content as close as possible to the requesting mesh clients. Furthermore, we aim to optimize the number of replicas per object to better utilize the storage capacity per node. In WMNs, the wireless link-quality is paramount in the placement decision and the measurement of the object access cost. Therefore, we propose a link-quality aware, distributed and scalable scheme for object replication. The proposed scheme exploits the long-term link-quality routing metrics to augment the replica placement decision and the instantaneous link-quality metrics for replica server selection. The simulation results show that our proposed scheme has better performance compared to other replication schemes.","PeriodicalId":385538,"journal":{"name":"2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems","volume":"89 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131965396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A spectrum-sensing detection scheme is presented for cognitive radios with multiple receive antennas operating over a wideband channel composed of a multitude of subbands. By taking the observations from all subbands into consideration in the likelihood ratio test for sensing a single subband, the proposed scheme provides better performance than conventional schemes.
{"title":"Spectrum Sensing with Receive Diversity for Cognitive Radio Operating over Wideband Channel","authors":"Taehun An, I. Song, Seungwon Lee, Hwang-Ki Min","doi":"10.1109/MASCOTS.2013.48","DOIUrl":"https://doi.org/10.1109/MASCOTS.2013.48","url":null,"abstract":"A detection scheme of spectrum sensing is discussed for cognitive radio with multiple receive antennas operating over a wideband channel composed of a multitude of sub bands. By taking the observations in all sub bands into consideration in the likelihood ratio test for sensing a sub band, the proposed scheme can provide better performance than other conventional schemes.","PeriodicalId":385538,"journal":{"name":"2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121199347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}