Double Regression: Efficient spatially correlated path loss model for wireless network simulation
Pub Date: 2013-04-14 | DOI: 10.1109/INFCOM.2013.6566981
Seon-Yeong Han, N. Abu-Ghazaleh, Dongman Lee
The accuracy of wireless network packet simulation critically depends on the quality of the wireless channel models. These models directly affect the fundamental network characteristics, such as link quality, transmission range, and capture effect, as well as their dynamic variation in time and space. Path loss is the stationary component of the channel model affected by the shadowing in the environment. Existing path loss models are inaccurate, require very high measurement or computational overhead, and/or often cannot be made to represent a given environment. The paper contributes a flexible path loss model that uses a novel approach for spatially coherent interpolation from available nearby channels to allow accurate and efficient modeling of path loss. We show that the proposed model, called Double Regression (DR), generates a correlated space, allowing both the sender and the receiver to move without abrupt change in path loss. Combining DR with a traditional temporal fading model, such as Rayleigh fading, provides an accurate and efficient channel model that we integrate with the NS-2 simulator. We use measurements to validate the accuracy of the model for a number of scenarios. We also show that there is substantial impact on simulation behavior (e.g., up to 600% difference in throughput for simple scenarios) when path loss is modeled accurately.
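To make the roles of the two channel components concrete, the sketch below samples a received power from a generic log-distance path loss with lognormal shadowing (the stationary component) plus a Rayleigh fading term (the temporal component). It only illustrates how such a combined model plugs together; it is not the Double Regression interpolation from the paper, and the function names and all parameter values (path loss exponent, shadowing variance, distances) are assumptions.

import math, random

def path_loss_db(d, d0=1.0, pl0_db=40.0, n=3.0, shadow_sigma_db=6.0):
    # Stationary component: log-distance path loss plus lognormal shadowing.
    # (In the paper the shadowing term is spatially correlated; here it is i.i.d.)
    d = max(d, d0)
    loss = pl0_db + 10.0 * n * math.log10(d / d0)
    loss += random.gauss(0.0, shadow_sigma_db)
    return loss

def rayleigh_fading_db():
    # Temporal component: power gain of a unit-power Rayleigh fading channel, in dB.
    x = random.gauss(0.0, math.sqrt(0.5))
    y = random.gauss(0.0, math.sqrt(0.5))
    return 10.0 * math.log10(x * x + y * y)

tx_power_dbm = 20.0
rx_power_dbm = tx_power_dbm - path_loss_db(50.0) + rayleigh_fading_db()
print(f"received power at 50 m: {rx_power_dbm:.1f} dBm")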
{"title":"Double Regression: Efficient spatially correlated path loss model for wireless network simulation","authors":"Seon-Yeong Han, N. Abu-Ghazaleh, Dongman Lee","doi":"10.1109/INFCOM.2013.6566981","DOIUrl":"https://doi.org/10.1109/INFCOM.2013.6566981","url":null,"abstract":"The accuracy of wireless network packet simulation critically depends on the quality of the wireless channel models. These models directly affect the fundamental network characteristics, such as link quality, transmission range, and capture effect, as well as their dynamic variation in time and space. Path loss is the stationary component of the channel model affected by the shadowing in the environment. Existing path loss models are inaccurate, require very high measurement or computational overhead, and/or often cannot be made to represent a given environment. The paper contributes a flexible path loss model that uses a novel approach for spatially coherent interpolation from available nearby channels to allow accurate and efficient modeling of path loss. We show that the proposed model, called Double Regression (DR), generates a correlated space, allowing both the sender and the receiver to move without abrupt change in path loss. Combining DR with a traditional temporal fading model, such as Rayleigh fading, provides an accurate and efficient channel model that we integrate with the NS-2 simulator. We use measurements to validate the accuracy of the model for a number of scenarios. We also show that there is substantial impact on simulation behavior (e.g., up to 600% difference in throughput for simple scenarios) when path loss is modeled accurately.","PeriodicalId":206346,"journal":{"name":"2013 Proceedings IEEE INFOCOM","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124941077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing data access latencies in cloud systems by intelligent virtual machine placement
Pub Date: 2013-04-14 | DOI: 10.1109/INFCOM.2013.6566850
M. Alicherry, T. V. Lakshman
Many cloud applications are data intensive, requiring the processing of large data sets, and the MapReduce/Hadoop architecture has become the de facto processing framework for these applications. Large data sets are stored in data nodes in the cloud, which are typically SAN or NAS devices. Cloud applications process these data sets using a large number of application virtual machines (VMs), with the total completion time being an important performance metric. Many factors affect the total completion time of the processing task, such as the load on the individual servers, the task scheduling mechanism, and communication and data access bottlenecks. One dominant factor affecting completion times for data-intensive applications is the access latency from processing nodes to data nodes. Ideally, one would like to keep all data access local to minimize access latency, but this is often not possible due to the size of the data sets, capacity constraints in processing nodes that prevent VMs from being placed in their ideal locations, and so on. When it is not possible to keep all data access local, one would like to optimize the placement of VMs so that the impact of data access latencies on completion times is minimized. We address this problem of optimized VM placement: given the location of the data sets, determine where to place the VMs so as to minimize data access latencies while satisfying system constraints. We present optimal algorithms for determining VM locations that satisfy various constraints, with objectives that capture natural tradeoffs between minimizing latencies and incurring bandwidth costs. We also consider the problem of incorporating inter-VM latency constraints. In this case, the associated location problem is NP-hard, with no effective approximation within a factor of 2 - ϵ for any ϵ > 0. We discuss an effective heuristic for this case and evaluate by simulation the impact of the various tradeoffs in the optimization objectives.
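As a rough illustration of the placement problem (not the paper's optimal algorithms), the sketch below greedily assigns each VM to the feasible host with the lowest latency to its data node, subject to per-host capacity. All host names, latencies, and capacities are invented.

latency = {            # latency[host][data_node] in ms (assumed values)
    "h1": {"d1": 1, "d2": 5},
    "h2": {"d1": 4, "d2": 2},
}
capacity = {"h1": 1, "h2": 2}                      # how many VMs each host can take
vm_data = {"vm1": "d1", "vm2": "d2", "vm3": "d1"}  # data node each VM reads

placement = {}
# Place the VMs with the largest potential latency penalty first.
for vm, dn in sorted(vm_data.items(),
                     key=lambda kv: -max(l[kv[1]] for l in latency.values())):
    feasible = [h for h in capacity if capacity[h] > 0]
    best = min(feasible, key=lambda h: latency[h][dn])
    placement[vm] = best
    capacity[best] -= 1

total = sum(latency[h][vm_data[vm]] for vm, h in placement.items())
print(placement, "total access latency:", total, "ms")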
{"title":"Optimizing data access latencies in cloud systems by intelligent virtual machine placement","authors":"M. Alicherry, T. V. Lakshman","doi":"10.1109/INFCOM.2013.6566850","DOIUrl":"https://doi.org/10.1109/INFCOM.2013.6566850","url":null,"abstract":"Many cloud applications are data intensive requiring the processing of large data sets and the MapReduce/Hadoop architecture has become the de facto processing framework for these applications. Large data sets are stored in data nodes in the cloud which are typically SAN or NAS devices. Cloud applications process these data sets using a large number of application virtual machines (VMs), with the total completion time being an important performance metric. There are many factors that affect the total completion time of the processing task such as the load on the individual servers, the task scheduling mechanism, communication and data access bottlenecks, etc. One dominating factor that affects completion times for data intensive applications is the access latencies from processing nodes to data nodes. Ideally, one would like to keep all data access local to minimize access latency but this is often not possible due to the size of the data sets, capacity constraints in processing nodes which constrain VMs from being placed in their ideal location and so on. When it is not possible to keep all data access local, one would like to optimize the placement of VMs so that the impact of data access latencies on completion times is minimized. We address this problem of optimized VM placement - given the location of the data sets, we need to determine the locations for placing the VMs so as to minimize data access latencies while satisfying system constraints. We present optimal algorithms for determining the VM locations satisfying various constraints and with objectives that capture natural tradeoffs between minimizing latencies and incurring bandwidth costs. We also consider the problem of incorporating inter-VM latency constraints. In this case, the associated location problem is NP-hard with no effective approximation within a factor of 2 - ϵ for any ϵ > 0. We discuss an effective heuristic for this case and evaluate by simulation the impact of the various tradeoffs in the optimization objectives.","PeriodicalId":206346,"journal":{"name":"2013 Proceedings IEEE INFOCOM","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125056671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FIFA: Fast incremental FIB aggregation
Pub Date: 2013-04-14 | DOI: 10.1109/INFCOM.2013.6566913
Yaoqing Liu, Beichuan Zhang, Lan Wang
The fast growth of global routing table size has been causing concerns that the Forwarding Information Base (FIB) will not be able to fit in existing routers' expensive line-card memory, and upgrades will lead to higher cost for network operators and customers. FIB Aggregation, a technique that merges multiple FIB entries into one, is probably the most practical solution since it is a software solution local to a router, and does not require any changes to routing protocols or network operations. While previous work on FIB aggregation mostly focuses on reducing table size, this work focuses on algorithms that can update compressed FIBs quickly and incrementally. Quick update is critical to routers because they have very limited time to process routing updates without impacting packet delivery performance. We have designed three algorithms: FIFA-S for smallest table size, FIFA-T for shortest running time, and FIFA-H for both small tables and short running time, and operators can use the one best suited to their needs. These algorithms significantly improve over existing work in terms of reducing routers' computation overhead and limiting impact on the forwarding plane while maintaining a good compression ratio.
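For readers unfamiliar with FIB aggregation itself, the sketch below shows its simplest form: a more-specific prefix whose next hop matches that of its longest covering prefix can be dropped without changing forwarding behavior. This is a generic illustration with invented entries, not the FIFA-S/T/H algorithms.

import ipaddress

fib = {                               # prefix -> next hop (invented entries)
    "10.0.0.0/8":     "A",
    "10.1.0.0/16":    "A",            # redundant: covered by 10.0.0.0/8 via A
    "10.2.0.0/16":    "B",            # must stay: different next hop
    "192.168.0.0/24": "C",
}

def covering(prefix, table):
    # Longest prefix in `table` that contains `prefix` (other than itself), or None.
    net = ipaddress.ip_network(prefix)
    covers = [p for p in table
              if p != prefix and net.subnet_of(ipaddress.ip_network(p))]
    return max(covers, key=lambda p: ipaddress.ip_network(p).prefixlen, default=None)

aggregated = {}
for prefix, nh in fib.items():
    parent = covering(prefix, fib)
    if parent is None or fib[parent] != nh:   # keep only if it changes forwarding
        aggregated[prefix] = nh

print(aggregated)   # 10.1.0.0/16 is gone; its addresses still resolve to next hop A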
{"title":"FIFA: Fast incremental FIB aggregation","authors":"Yaoqing Liu, Beichuan Zhang, Lan Wang","doi":"10.1109/INFCOM.2013.6566913","DOIUrl":"https://doi.org/10.1109/INFCOM.2013.6566913","url":null,"abstract":"The fast growth of global routing table size has been causing concerns that the Forwarding Information Base (FIB) will not be able to fit in existing routers' expensive line-card memory, and upgrades will lead to higher cost for network operators and customers. FIB Aggregation, a technique that merges multiple FIB entries into one, is probably the most practical solution since it is a software solution local to a router, and does not require any changes to routing protocols or network operations. While previous work on FIB aggregation mostly focuses on reducing table size, this work focuses on algorithms that can update compressed FIBs quickly and incrementally. Quick update is critical to routers because they have very limited time to process routing updates without impacting packet delivery performance. We have designed three algorithms: FIFA-S for smallest table size, FIFA-T for shortest running time, and FIFA-H for both small tables and short running time, and operators can use the one best suited to their needs. These algorithms significantly improve over existing work in terms of reducing routers' computation overhead and limiting impact on the forwarding plane while maintaining a good compression ratio.","PeriodicalId":206346,"journal":{"name":"2013 Proceedings IEEE INFOCOM","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131647685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sustainable energy consumption monitoring in residential settings
Pub Date: 2013-04-14 | DOI: 10.1109/INFCOMW.2013.6562866
S. Nambi, Thanasis G. Papaioannou, D. Chakraborty, K. Aberer
The continuous growth of energy needs, and the fact that unpredictable energy demand is mostly served by unsustainable (i.e., fossil-fuel) power generators, have given rise to Demand Response (DR) mechanisms for flattening energy demand. Building effective DR mechanisms and raising user awareness of power consumption can benefit significantly from fine-grained monitoring of user consumption at the appliance level. However, installing and maintaining such a monitoring infrastructure in residential settings can be quite expensive. In this paper, we study the problem of fine-grained appliance power-consumption monitoring based on one house-level meter and a few plug-level meters. We explore the trade-off between monitoring accuracy and cost, and exhaustively find the minimum subset of plug-level meters that maximizes accuracy. As exhaustive search is time- and resource-consuming, we define a heuristic approach that finds the optimal set of plug-level meters without evaluating any other sets of plug-level meters. Based on experiments with real data, we found that a few plug-level meters, when appropriately placed, can very accurately disaggregate the total real power consumption of a residential setting, and we verified the effectiveness of our heuristic approach.
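The exhaustive search that the heuristic is compared against can be illustrated as follows: enumerate appliance subsets in increasing size and stop at the first subset whose directly metered power reaches a target fraction of the house-level total. The appliance list, power values, and the 90% target below are invented, and real disaggregation accuracy would be computed from time series rather than averages.

from itertools import combinations

appliance_power = {          # average real power per appliance in W (invented)
    "fridge": 150, "heater": 900, "tv": 100, "router": 10, "washer": 450,
}
house_total = sum(appliance_power.values())
target = 0.9                 # fraction of total power we want metered directly

best = None
for k in range(1, len(appliance_power) + 1):
    for subset in combinations(appliance_power, k):
        covered = sum(appliance_power[a] for a in subset) / house_total
        if covered >= target:
            best = subset    # smallest subset reaching the target
            break
    if best:
        break

print("plug-level meters to install:", best)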
{"title":"Sustainable energy consumption monitoring in residential settings","authors":"S. Nambi, Thanasis G. Papaioannou, D. Chakraborty, K. Aberer","doi":"10.1109/INFCOMW.2013.6562866","DOIUrl":"https://doi.org/10.1109/INFCOMW.2013.6562866","url":null,"abstract":"The continuous growth of energy needs and the fact that unpredictable energy demand is mostly served by unsustainable (i.e. fossil-fuel) power generators have given rise to the development of Demand Response (DR) mechanisms for flattening energy demand. Building effective DR mechanisms and user awareness on power consumption can significantly benefit from fine-grained monitoring of user consumption at the appliance level. However, installing and maintaining such a monitoring infrastructure in residential settings can be quite expensive. In this paper, we study the problem of fine-grained appliance power-consumption monitoring based on one house-level meter and few plug-level meters. We explore the trade-off between monitoring accuracy and cost, and exhaustively find the minimum subset of plug-level meters that maximize accuracy. As exhaustive search is time- and resource-consuming, we define a heuristic approach that finds the optimal set of plug-level meters without utilizing any other sets of plug-level meters. Based on experiments with real data, we found that few plug-level meters - when appropriately placed - can very accurately disaggregate the total real power consumption of a residential setting and verified the effectiveness of our heuristic approach.","PeriodicalId":206346,"journal":{"name":"2013 Proceedings IEEE INFOCOM","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133837890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient 3G budget utilization in mobile participatory sensing applications
Pub Date: 2013-04-14 | DOI: 10.1109/INFCOM.2013.6566935
Hengchang Liu, Shaohan Hu, Wei Zheng, Zhiheng Xie, Shiguang Wang, P. Hui, T. Abdelzaher
This paper explores efficient 3G budget utilization in mobile participatory sensing applications. Distinct from previous research that either relies on limited WiFi access points or assumes unlimited 3G communication capability, we offer a more practical participatory sensing system that leverages the 3G budgets participants contribute at will, and uses them efficiently, customized to the needs of multiple participatory sensing applications with heterogeneous sensitivity to environmental changes. We address the challenge that data generation and WiFi encounters are not known a priori, and propose an online decision-making algorithm that takes advantage of participants' historical data. We also develop a heuristic algorithm that consumes less energy and reduces storage overhead while maintaining efficient 3G budget utilization. Experimental results from a 30-participant deployment demonstrate that, even when the budget is as small as 2.5% of a popular data plan, the two algorithms achieve higher utility of uploaded data than the baseline solution; in particular, they increase the utility of received data by 151.4% and 137.8%, respectively, for sensitive applications.
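A toy version of such an online upload decision might look like the sketch below: spend 3G budget on a sample only when the utility lost while waiting for the next expected WiFi encounter outweighs a "price" that grows as the budget shrinks. This is an assumed rule for illustration, not the algorithm in the paper, and every name and constant in it is invented.

def should_upload_over_3g(utility, budget_left_mb, sample_size_mb,
                          expected_wifi_delay_s, decay_per_s=0.001):
    # Upload now over 3G if the utility lost while waiting for WiFi
    # exceeds the "price" of the budget we would spend.
    if budget_left_mb < sample_size_mb:
        return False
    utility_lost_waiting = utility * min(1.0, decay_per_s * expected_wifi_delay_s)
    budget_price = sample_size_mb / budget_left_mb   # scarcer budget -> pricier
    return utility_lost_waiting > budget_price

# A delay-sensitive sample with 30 minutes until the next expected WiFi encounter:
print(should_upload_over_3g(utility=0.8, budget_left_mb=50,
                            sample_size_mb=2, expected_wifi_delay_s=1800))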
{"title":"Efficient 3G budget utilization in mobile participatory sensing applications","authors":"Hengchang Liu, Shaohan Hu, Wei Zheng, Zhiheng Xie, Shiguang Wang, P. Hui, T. Abdelzaher","doi":"10.1109/INFCOM.2013.6566935","DOIUrl":"https://doi.org/10.1109/INFCOM.2013.6566935","url":null,"abstract":"This paper explores efficient 3G budget utilization in mobile participatory sensing applications. 1 Distinct from previous research work that either rely on limited WiFi access points or assume the availability of unlimited 3G communication capability, we offer a more practical participatory sensing system that leverages potential 3G budgets that participants contribute at will, and uses it efficiently customized for the needs of multiple participatory sensing applications with heterogeneous sensitivity to environmental changes. We address the challenge that the information of data generation and WiFi encounters is not a priori knowledge, and propose an online decision making algorithm that takes advantage of participants' historical data. We also develop a heuristic algorithm to consume less energy and reduce the storage overhead while maintaining efficient 3G budget utilization. Experimental results from a 30-participant deployment demonstrate that, even when the budget is as small as 2.5% of a popular data plan, these two algorithms achieve higher utility of uploaded data compared to the baseline solution, especially, they increase the utility of received data by 151.4% and 137.8% for those sensitive applications.","PeriodicalId":206346,"journal":{"name":"2013 Proceedings IEEE INFOCOM","volume":"159 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133964703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ARTSense: Anonymous reputation and trust in participatory sensing
Pub Date: 2013-04-14 | DOI: 10.1109/INFCOM.2013.6567058
Xinlei Wang, W. Cheng, P. Mohapatra, T. Abdelzaher
With the proliferation of sensor-embedded mobile computing devices, participatory sensing is becoming a popular way to collect information from, and outsource tasks to, participating users. These applications handle a great deal of personal information, e.g., users' identities and locations at specific times, so privacy and anonymity deserve close attention. However, from a data consumer's point of view, one wants to know the source of the sensing data, i.e., the identity of the sender, in order to evaluate how much the data can be trusted. “Anonymity” and “trust” are thus two conflicting objectives in participatory sensing networks, and no existing research efforts have investigated achieving both at the same time. In this paper, we propose ARTSense, a framework that solves the problem of “trust without identity” in participatory sensing networks. Our solution consists of a privacy-preserving provenance model, a data trust assessment scheme, and an anonymous reputation management protocol. We show that ARTSense achieves the anonymity and security requirements, and our validation shows that it captures the trust of information and the reputation of participants accurately.
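As one generic ingredient of reputation management (not ARTSense's actual protocol, which additionally preserves the contributor's anonymity), the sketch below maintains a beta-reputation score updated from the trust assessment attached to each submitted report; the class name and the sample trust values are invented.

class Reputation:
    def __init__(self):
        self.alpha = 1.0     # pseudo-count of trustworthy evidence
        self.beta = 1.0      # pseudo-count of untrustworthy evidence

    def update(self, trust):
        # `trust` in [0, 1] comes from the data trust assessment of one report.
        self.alpha += trust
        self.beta += 1.0 - trust

    def score(self):
        return self.alpha / (self.alpha + self.beta)

rep = Reputation()
for trust in (0.9, 0.8, 0.2):        # assessed trust of three reports (invented)
    rep.update(trust)
print(round(rep.score(), 2))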
{"title":"ARTSense: Anonymous reputation and trust in participatory sensing","authors":"Xinlei Wang, W. Cheng, P. Mohapatra, T. Abdelzaher","doi":"10.1109/INFCOM.2013.6567058","DOIUrl":"https://doi.org/10.1109/INFCOM.2013.6567058","url":null,"abstract":"With the proliferation of sensor-embedded mobile computing devices, participatory sensing is becoming popular to collect information from and outsource tasks to participating users. These applications deal with a lot of personal information, e.g., users' identities and locations at a specific time. Therefore, we need to pay a deeper attention to privacy and anonymity. However, from a data consumer's point of view, we want to know the source of the sensing data, i.e., the identity of the sender, in order to evaluate how much the data can be trusted. “Anonymity” and “trust” are two conflicting objectives in participatory sensing networks, and there are no existing research efforts which investigated the possibility of achieving both of them at the same time. In this paper, we propose ARTSense, a framework to solve the problem of “trust without identity” in participatory sensing networks. Our solution consists of a privacy-preserving provenance model, a data trust assessment scheme and an anonymous reputation management protocol. We have shown that ARTSense achieves the anonymity and security requirements. Validations are done to show that we can capture the trust of information and reputation of participants accurately.","PeriodicalId":206346,"journal":{"name":"2013 Proceedings IEEE INFOCOM","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122312911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Into the Moana — Hypergraph-based network layer indirection
Pub Date: 2013-04-14 | DOI: 10.1109/INFCOM.2013.6567162
Yan Shvartzshnaider, M. Ott, O. Mehani, Guillaume Jourjon, T. Rakotoarivelo, D. Levy
In this paper, we introduce the Moana network infrastructure. It draws on well-adopted practices from the database and software engineering communities to provide a robust and expressive information-sharing service using hypergraph-based network indirection. Our proposal is twofold. First, we argue for the need for additional layers of indirection used in modern information systems to bring the network layer abstraction closer to the developer's world, allowing for expressiveness and flexibility in the creation of future services. Second, we present a modular and extensible design of the network fabric to support incremental architectural evolution and innovation, as well as its initial evaluation.
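To make "hypergraph-based indirection" tangible, the sketch below keeps a table of hyperedges, each binding one label to an arbitrary set of endpoints, so a sender resolves a label instead of naming endpoints directly. The bind/resolve API and all labels are invented for illustration and are not Moana's actual interface.

hyperedges = {}          # edge label -> set of vertices (names/endpoints)

def bind(label, *vertices):
    # Attach one or more endpoints to the hyperedge identified by `label`.
    hyperedges.setdefault(label, set()).update(vertices)

def resolve(label):
    # The indirection step: the sender looks up the label, not the endpoints.
    return hyperedges.get(label, set())

bind("video/news", "server-eu-1", "server-us-3", "cache-paris")
bind("video/news", "cache-tokyo")            # endpoints can join later
print(resolve("video/news"))                 # the sender never names them directly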
{"title":"Into the Moana1 — Hypergraph-based network layer indirection","authors":"Yan Shvartzshnaider, M. Ott, O. Mehani, Guillaume Jourjon, T. Rakotoarivelo, D. Levy","doi":"10.1109/INFCOM.2013.6567162","DOIUrl":"https://doi.org/10.1109/INFCOM.2013.6567162","url":null,"abstract":"In this paper, we introduce the Moana network infrastructure. It draws on well-adopted practices from the database and software engineering communities to provide a robust and expressive information-sharing service using hypergraph-based network indirection. Our proposal is twofold. First, we argue for the need for additional layers of indirection used in modern information systems to bring the network layer abstraction closer to the developer's world, allowing for expressiveness and flexibility in the creation of future services. Second, we present a modular and extensible design of the network fabric to support incremental architectural evolution and innovation, as well as its initial evaluation.","PeriodicalId":206346,"journal":{"name":"2013 Proceedings IEEE INFOCOM","volume":"77 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114165308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wireless network coding with partial overhearing information
Pub Date: 2013-04-14 | DOI: 10.1109/INFCOM.2013.6567038
G. Paschos, Constantinos Fragiadakis, L. Georgiadis, L. Tassiulas
We study a 1-hop broadcast channel with two receivers. Due to overhearing channels, the receivers have side information that can be leveraged by inter-flow network coding techniques to increase throughput. In this setup, we consider two different control mechanisms: the deterministic system, where the contents of the receivers' buffers are announced to the coding node via overhearing reports, and the stochastic system, where the coding node makes stochastic control decisions based on statistics and performance is improved via NACK messages. We study the minimal evacuation times for the two systems and obtain analytical expressions for the throughput region of the deterministic system and the code-constrained region of the stochastic one. We show that maximum performance is achieved by simple XOR policies. For equal transmission rates r1 = r2, the two regions coincide; if r1 ≠ r2, we showcase the tradeoff between throughput and overhead.
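The XOR opportunity behind these policies is the classic two-receiver example: receiver 1 wants packet A and has overheard B, receiver 2 wants B and has overheard A, so a single coded transmission serves both. A minimal sketch (packet contents invented):

a = b"packet-A"
b = b"packet-B"
coded = bytes(x ^ y for x, y in zip(a, b))   # A XOR B, sent once by the coding node

# Receiver 1 (holds B) recovers A; receiver 2 (holds A) recovers B.
recovered_a = bytes(x ^ y for x, y in zip(coded, b))
recovered_b = bytes(x ^ y for x, y in zip(coded, a))
assert recovered_a == a and recovered_b == b
print("both receivers decoded from a single coded transmission")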
{"title":"Wireless network coding with partial overhearing information","authors":"G. Paschos, Constantinos Fragiadakis, L. Georgiadis, L. Tassiulas","doi":"10.1109/INFCOM.2013.6567038","DOIUrl":"https://doi.org/10.1109/INFCOM.2013.6567038","url":null,"abstract":"We study an 1-hop broadcast channel with two receivers. Due to overhearing channels, the receivers have side information which can be leveraged by interflow network coding techniques to provide throughput increase. In this setup, we consider two different control mechanisms, the deterministic system, where the contents of the receivers' buffers are announced to the coding node via overhearing reports and the stochastic system, where the coding node makes stochastic control decisions based on statistics and the performance is improved via NACK messages. We study the minimal evacuation times for the two systems and obtain analytical expressions of the throughput region for the deterministic and the code-constrained region for the stochastic. We show that maximum performance is achieved by simple XOR policies. For equal transmission rates r1 = r2, the two regions are equal. If r1 ≠ r2, we showcase the tradeoff between throughput and overhead.","PeriodicalId":206346,"journal":{"name":"2013 Proceedings IEEE INFOCOM","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121484703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real time market models and prosumer profiling
Pub Date: 2013-04-14 | DOI: 10.1109/INFCOMW.2013.6562867
U. Montanari, Alain Tcheukam Siwe
Decentralized power management systems will play a key role in reducing greenhouse gas emissions and increasing electricity production from alternative energy sources. In this paper, we focus on power market models in which prosumers interact in a distributed environment when purchasing or selling electric power. We follow the distributed power market model DEZENT. Our contribution is a planning phase for prosumer consumption built on top of DEZENT's negotiation mechanism: we propose a consumption-planning controller that aims to minimize the electricity cost incurred by the end of a day. We also discuss the assumptions on which the controller design is based.
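One very simple instance of consumption planning (not the DEZENT-based controller in the paper) is deferring a flexible load into the cheapest hours of a forecast price curve, as in the sketch below; the hourly prices, energy need, and charging rate are invented.

prices = [0.30, 0.28, 0.25, 0.22, 0.20, 0.21, 0.27, 0.35,   # EUR/kWh, hours 0-7 (invented)
          0.40, 0.38, 0.33, 0.30, 0.29, 0.28, 0.30, 0.34,   # hours 8-15
          0.42, 0.45, 0.44, 0.40, 0.36, 0.33, 0.31, 0.30]   # hours 16-23
need_kwh = 6          # deferrable energy to consume today (e.g. EV charging)
rate_kw = 2           # maximum power drawn in any one hour

hours_needed = -(-need_kwh // rate_kw)                 # ceiling division
cheapest = sorted(range(24), key=lambda h: prices[h])[:hours_needed]
cost = sum(prices[h] * rate_kw for h in cheapest)
print("consume during hours", sorted(cheapest), "for", round(cost, 2), "EUR")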
{"title":"Real time market models and prosumer profiling","authors":"U. Montanari, Alain Tcheukam Siwe","doi":"10.1109/INFCOMW.2013.6562867","DOIUrl":"https://doi.org/10.1109/INFCOMW.2013.6562867","url":null,"abstract":"Decentralized power management systems will play a key role in reducing greenhouse gas emissions and increasing electricity production through alternative energy sources. In this paper, we focus on power market models in which prosumers interact in a distributed environment during the purchase or sale of electric power. We have chosen to follow the distributed power market model DEZENT. Our contribution is the planning phase of the consumption of prosumers based on the negotiation mechanism of DEZENT. We propose a controller for the planning of the consumption which aims at minimizing the electricity cost achieved at the end of a day. In the paper we discuss the assumptions on which the controller design is based.","PeriodicalId":206346,"journal":{"name":"2013 Proceedings IEEE INFOCOM","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116235087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal pricing for local and global WiFi markets
Pub Date: 2013-04-14 | DOI: 10.1109/INFCOM.2013.6566899
Lingjie Duan, Jianwei Huang, Biying Shou
This paper analyzes two pricing schemes commonly used in WiFi markets: flat-rate pricing and usage-based pricing. Flat-rate pricing encourages users to maximize their WiFi usage and targets users with high valuations of mobile Internet access, whereas usage-based pricing is flexible enough to attract more users, even those with low valuations. First, we show that for a local provider, flat-rate pricing provides more revenue than usage-based pricing, which is consistent with common practice in today's local markets. Second, we study how Skype may work with many local WiFi providers to offer a global WiFi service. We formulate the interactions between Skype, local providers, and users as a two-stage dynamic game. In Stage I, Skype bargains with each local provider to determine the global Skype WiFi service price and the revenue-sharing agreement; in Stage II, local users and travelers decide whether and how to use the local or Skype WiFi service. Our analysis uncovers two key insights behind Skype's current choice of usage-based pricing for its global WiFi service: it avoids severe competition with local providers and attracts travelers to the service. We further show that, at the equilibrium, Skype needs to share the majority of its revenue with a local provider to compensate for the local provider's revenue loss due to competition. When there are more travelers or fewer local users, the competition between Skype and a local provider becomes less severe, and Skype can give away less revenue and reduce its usage-based price to attract more users.
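The first finding can be illustrated with a toy revenue comparison: under flat-rate pricing a user subscribes if the value of their unconstrained usage exceeds the fee, while under usage-based pricing a user consumes only if their per-MB valuation exceeds the price. The user valuations, demands, and prices below are invented, and the model is far simpler than the paper's game formulation.

users = [                       # (valuation per MB, demand in MB at zero price) - invented
    (0.05, 400), (0.02, 300), (0.01, 200), (0.008, 100),
]

def flat_rate_revenue(fee):
    # A user subscribes if the value of their unconstrained usage exceeds the flat fee.
    return sum(fee for v, d in users if v * d >= fee)

def usage_based_revenue(price_per_mb):
    # A user consumes only if their per-MB valuation exceeds the price.
    return sum(price_per_mb * d for v, d in users if v >= price_per_mb)

print("flat-rate  $15:   ", flat_rate_revenue(15.0))
print("usage $0.01/MB:   ", usage_based_revenue(0.01))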
{"title":"Optimal pricing for local and global WiFi markets","authors":"Lingjie Duan, Jianwei Huang, Biying Shou","doi":"10.1109/INFCOM.2013.6566899","DOIUrl":"https://doi.org/10.1109/INFCOM.2013.6566899","url":null,"abstract":"This paper analyzes two pricing schemes commonly used in WiFi markets: flat-rate pricing and usage-based pricing. The flat-free pricing encourages users to achieve the maximum WiFi usage and targets at users with high valuations in mobile Internet access, whereas the usage-based pricing is flexible to attract more users - even those with low valuations. First, we show that for a local provider, the flat-rate pricing provides more revenue than the usage-based pricing, which is consistent with the common practice in today's local markets. Second, we study how Skype may work with many local WiFi providers to provide a global WiFi service. We formulate the interactions between Skype, local providers, and users as a two-stage dynamic game. In Stage I, Skype bargains with each local provider to determine the global Skype WiFi service price and revenue sharing agreement; in Stage II, local users and travelers decide whether and how to use local or Skype WiFi service. Our analysis discovers two key insights behind Skype's current choice of usage-based pricing for its global WiFi service: to avoid severe competition with local providers and attract travelers to the service. We further show that at the equilibrium, Skype needs to share the majority of his revenue with a local provider to compensate the local provider's revenue loss due to competition. When there are more travelers or fewer local users, the competition between Skype and a local provider becomes less severe, and Skype can give away less revenue and reduce its usage-based price to attract more users.","PeriodicalId":206346,"journal":{"name":"2013 Proceedings IEEE INFOCOM","volume":"246 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116442897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}