Adaptive Bloom filters for multicast addressing
Pub Date: 2011-04-10 | DOI: 10.1109/INFCOMW.2011.5928802
Zalán Heszberger, János Tapolcai, A. Gulyás, J. Bíró, A. Zahemszky, P. Ho
In-packet Bloom filters have recently been proposed as a possible building block of future Internet architectures, replacing IP or MPLS addressing and providing efficient multicast routing, security, and other functions in a stateless manner. In such frameworks a Bloom filter is placed in the packet header and stores the addresses of the destination nodes or of the traversed links. In contrast to the standard Bloom filter, the length of the in-packet Bloom filter must adapt closely to the number of stored elements to achieve low communication overhead. In this paper we propose a novel type of Bloom filter, called the Adaptive Bloom filter, which can adapt its length to the number of represented elements with very fine granularity. The novel filter can significantly reduce the header size in in-packet Bloom filter architectures by eliminating the waste inherent in existing “block-based” approaches, which concatenate several standard Bloom filters. It does, however, require slightly more computation when adding and removing elements.
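The abstract does not spell out the adaptive construction itself, but the sizing trade-off it targets is easy to illustrate with a plain Bloom filter: for a fixed false-positive rate, the required filter length grows roughly linearly with the number of inserted elements, which is why a header-resident filter benefits from fine-grained length adaptation. The sketch below is a minimal standard Bloom filter plus the classic sizing formula; the class, its parameters and the double-hashing scheme are illustrative assumptions, not the paper's Adaptive Bloom filter.

```python
import hashlib
import math

class BloomFilter:
    """Minimal standard Bloom filter; illustrative only, not the paper's adaptive construction."""
    def __init__(self, m_bits, k_hashes):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray((m_bits + 7) // 8)

    def _positions(self, item):
        # Derive k bit positions from one SHA-256 digest via double hashing.
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def required_bits(n_elements, fp_rate):
    """Classic sizing formula m = -n ln(p) / (ln 2)^2: what a length-adaptive
    in-packet filter would track as elements (links/destinations) are added."""
    return math.ceil(-n_elements * math.log(fp_rate) / (math.log(2) ** 2))

if __name__ == "__main__":
    for n in (5, 10, 20, 40):
        print(n, "elements ->", required_bits(n, 0.01), "bits at 1% false positives")
    bf = BloomFilter(required_bits(10, 0.01), k_hashes=7)
    for link in ["linkA", "linkB", "linkC"]:
        bf.add(link)
    print("linkA" in bf, "linkZ" in bf)
```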
{"title":"Adaptive Bloom filters for multicast addressing","authors":"Zalán Heszberger, János Tapolcai, A. Gulyás, J. Bíró, A. Zahemszky, P. Ho","doi":"10.1109/INFCOMW.2011.5928802","DOIUrl":"https://doi.org/10.1109/INFCOMW.2011.5928802","url":null,"abstract":"In-packet Bloom filters are recently proposed as a possible building block of future Internet architectures replacing IP or MPLS addressing that solves efficient multicast routing, security and other functions in a stateless manner. In such frameworks a bloom filter is placed in the header which stores the addresses of the destination nodes or the traversed links. In contrast to the standard Bloom filter, the length of the in-packet Bloom filter must be highly adaptive to the number of stored elements to achieve low communication overhead. In this paper we propose a novel type of Bloom filter called Adaptive Bloom filter, which can adapt its length to the number of elements to be represented with a very fine granularity. The novel filter can significantly reduce the header size for in-packet bloom filter architecture, by eliminating the wasting effect experienced in existing “block-based” approaches which rely on concatenating several standard Bloom filters. Nevertheless, it requires slightly more calculations when adding and removing elements.","PeriodicalId":402219,"journal":{"name":"2011 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125186633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
L-WMxD: Lexical based Webmail XSS Discoverer
Pub Date: 2011-04-10 | DOI: 10.1109/INFCOMW.2011.5928954
Zhushou Tang, Haojin Zhu, Z. Cao, Shuai Zhao
XSS (Cross-Site Scripting) is a major security threat for web applications. Because the source code of web applications is usually unavailable, fuzzing has become a popular approach to discovering XSS in web applications, with the notable exception of Webmail. This paper proposes a Webmail XSS fuzzer called L-WMxD (Lexical based Webmail XSS Discoverer). L-WMxD, built around a lexical-based mutation engine, is an active defense system that discovers XSS before a Webmail application goes online for service. The engine is initialized with normal JavaScript code called the seed. Rules are then applied to the sensitive strings in the seed, which are picked out by a lexical parser, and the mutation engine emits multiple test cases used for XSS testing. We have implemented two prototype tools that send the newly generated test cases to various Webmail servers to discover XSS vulnerabilities. Experimental results for L-WMxD are quite encouraging: we have run L-WMxD over 26 real-world Webmail applications and found vulnerabilities in 21 Webmail services, including some of the most widely used, such as Yahoo! Mail, Mirapoint Webmail and Oracle's Collaboration Suite Mail.
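The rule set and lexical parser used by L-WMxD are not detailed in the abstract, so the following is only a toy sketch of the general idea: take a seed payload, pick out tokens treated as sensitive, and apply simple mutation rules to each one to generate test cases. The seed, the token list and the three rules below are invented for illustration.

```python
import itertools

# Toy stand-ins for L-WMxD's lexical rules; the real rule set and parser are not
# described in the abstract, so these mutations are illustrative assumptions.
SEED = '<img src="x" onerror="alert(1)">'
SENSITIVE = ["onerror", "alert", "src"]

def case_flip(tok):       return tok.upper()
def html_entity(tok):     return "".join("&#%d;" % ord(c) for c in tok)
def insert_null(tok):     return tok[0] + "\x00" + tok[1:]

RULES = [case_flip, html_entity, insert_null]

def mutate(seed, sensitive, rules):
    """Yield test cases: each sensitive token rewritten by each rule, one at a time."""
    for tok, rule in itertools.product(sensitive, rules):
        if tok in seed:
            yield seed.replace(tok, rule(tok))

if __name__ == "__main__":
    for case in mutate(SEED, SENSITIVE, RULES):
        print(repr(case))   # these would be mailed to the Webmail server under test
```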
{"title":"L-WMxD: Lexical based Webmail XSS Discoverer","authors":"Zhushou Tang, Haojin Zhu, Z. Cao, Shuai Zhao","doi":"10.1109/INFCOMW.2011.5928954","DOIUrl":"https://doi.org/10.1109/INFCOMW.2011.5928954","url":null,"abstract":"XSS (Cross-Site Scripting) is a major security threat for web applications. Due to lack of source code of web application, fuzz technique has become a popular approach to discover XSS in web application except Webmail. This paper proposes a Webmail XSS fuzzer called L-WMxD (Lexical based Webmail XSS Discoverer). L-WMxD , which works on a lexical based mutation engine, is an active defense system to discover XSS before the Webmail application is online for service. The engine is initialized by normal JavaScript code called seed. Then, rules are applied to the sensitive strings in the seed which are picked out through a lexical parser. After that, the mutation engine issues multiple test cases. Newly-generated test cases are used for XSS test. Two prototype tools are realized by us to send the newly-generated test cases to various Webmail servers to discover XSS vulnerability. Experimental results of L-WMxD are quite encouraging. We have run L-WMxD over 26 real-world Webmail applications and found vulnerabilities in 21 Webmail services, including some of the most widely used Yahoo!Mail, Mirapoint Webmail and ORACLE' Collaboration Suite Mail.","PeriodicalId":402219,"journal":{"name":"2011 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122990011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Micropatterning of different kinds of biomaterials as a platform of a molecular communication system
Pub Date: 2011-04-10 | DOI: 10.1109/INFCOMW.2011.5928860
S. Hiyama, Y. Moritani, K. Kuribayashi-Shigetomi, H. Onoe, S. Takeuchi
We aimed to create a method for micropatterning different kinds of biomaterials onto a single substrate as a platform for a molecular communication system. This paper proposes a simultaneous peel-off process based on multiple poly(para-xylylene) (parylene) layers and demonstrates that different kinds of proteins and DNAs were successfully microarrayed onto a single substrate. Further, the functionality and contamination-free nature of the microarrayed biomaterials were maintained throughout the micropatterning process. Our results contribute to the development of microarrayed senders and receivers, such as DNA-tagged vesicles and/or biological cells, in molecular communication, and will help to investigate and visualize the overall molecular communication process.
{"title":"Micropatterning of different kinds of biomaterials as a platform of a molecular communication system","authors":"S. Hiyama, Y. Moritani, K. Kuribayashi-Shigetomi, H. Onoe, S. Takeuchi","doi":"10.1109/INFCOMW.2011.5928860","DOIUrl":"https://doi.org/10.1109/INFCOMW.2011.5928860","url":null,"abstract":"We aimed to create a method in micropatterning of different kinds of biomaterials as a platform of a molecular communication system onto a single substrate. This paper proposes a multiple poly(para-xylylene) (parylene) simultaneous peel-off process and demonstrates that different kinds of proteins and DNAs were successfully microarrayed onto the single substrate. Further, the functionalities and the contamination-free nature of these microarrayed biomaterials were maintained throughout the micropatterning process. Our results contribute to the development of microarrayed senders and receivers such as DNA-tagged vesicles and/or biological cells in molecular communication, and will help to investigate and visualize the overall of molecular communication processes.","PeriodicalId":402219,"journal":{"name":"2011 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131499141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Profiling per-packet and per-byte energy consumption in the NetFPGA Gigabit router
Pub Date: 2011-04-10 | DOI: 10.1109/INFCOMW.2011.5928833
V. Sivaraman, A. Vishwanath, Zhi Zhao, Craig Russell
Improving the energy efficiency of Internet equipment is becoming an increasingly important research topic, motivated by the need to reduce energy costs (and carbon footprint) for Internet Service Providers, as well as to increase power density and achieve more switching capacity per rack. While recent research has profiled the power consumption of commercial routing equipment, these profiles are coarse-grained (i.e., at the granularity of a line card or a port), and such platforms are inflexible for experimentation with new energy-saving mechanisms. In this paper we therefore consider the NetFPGA platform, which is becoming an increasingly popular routing platform for networking research due to its versatility and low cost. Using a precise hardware-based traffic generator and a high-fidelity energy probe, we conduct several experiments that allow us to decompose the energy consumption of the NetFPGA routing card into fine-grained per-packet and per-byte components with reasonable accuracy. Our quantification of energy consumption on this platform opens the door to estimating network-wide energy footprints at the granularity of traffic sessions and applications (e.g., TCP file transfers), and provides a benchmark against which energy improvements arising from new architectures and protocols can be evaluated.
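The decomposition into per-packet and per-byte components can be illustrated with a simple linear fit: if the energy per measurement interval is modeled as a baseline plus a per-packet and a per-byte term, the coefficients can be recovered from traffic/energy samples by least squares. The sketch below uses synthetic numbers (not NetFPGA measurements) purely to show the fitting step.

```python
import numpy as np

# Synthetic example only: decompose measured energy into baseline, per-packet and
# per-byte components via least squares. The coefficients and traffic figures are
# made up for illustration, not NetFPGA measurements.
rng = np.random.default_rng(0)
packets = rng.integers(1_000, 100_000, size=50)          # packets forwarded per interval
byte_counts = packets * rng.integers(64, 1500, size=50)  # bytes forwarded per interval

E_BASE, E_PKT, E_BYTE = 5.0, 2.0e-5, 1.5e-8              # assumed "true" coefficients (Joules)
energy = (E_BASE + E_PKT * packets + E_BYTE * byte_counts
          + rng.normal(0, 0.05, size=50))                # measured energy with probe noise

A = np.column_stack([np.ones_like(packets, dtype=float), packets, byte_counts])
coeffs, *_ = np.linalg.lstsq(A, energy, rcond=None)
base, per_packet, per_byte = coeffs
print(f"baseline ~ {base:.3f} J, per-packet ~ {per_packet:.2e} J, per-byte ~ {per_byte:.2e} J")
```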
{"title":"Profiling per-packet and per-byte energy consumption in the NetFPGA Gigabit router","authors":"V. Sivaraman, A. Vishwanath, Zhi Zhao, Craig Russell","doi":"10.1109/INFCOMW.2011.5928833","DOIUrl":"https://doi.org/10.1109/INFCOMW.2011.5928833","url":null,"abstract":"Improving energy efficiency of Internet equipment is becoming an increasingly important research topic, motivated by the need to reduce energy costs (and Carbon footprint) for Internet Service Providers, as well as increase power density to achieve more switching capacity per-rack. While recent research has profiled the power consumption of commercial routing equipment, these profiles are coarse-grained (i.e., at the granularity of per line-card or per port), and moreover such platforms are inflexible for experimentation with new energy-saving mechanisms. In this paper we therefore consider the NetFPGA platform, which is becoming an increasingly popular routing platform for networking research due to its versatility and low-cost. Using a precise hardware-based traffic generator and high-fidelity energy probe, we conduct several experiments that allow us to decompose the energy consumption of the NetFPGA routing card into fine-grained per-packet and per-byte components with reasonable accuracy. Our quantification of energy consumption on this platform opens the doors for estimating network-wide energy footprints at the granularity of traffic sessions and applications (e.g., due to TCP file transfers), and provides a benchmark against which energy improvements arising from new architectures and protocols can be evaluated.","PeriodicalId":402219,"journal":{"name":"2011 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128052070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An OSPF enhancement for energy saving in IP networks
Pub Date: 2011-04-10 | DOI: 10.1109/INFCOMW.2011.5928832
A. Cianfrani, V. Eramo, M. Listanti, Marco Polverini
This paper deals with a strategy to save energy in an IP network during low-traffic hours by allowing a subset of IP router interfaces to be put into sleep mode by means of an Energy Aware Routing (EAR) strategy. EAR is fully compatible with OSPF and is based on the “Shortest Path Tree (SPT) exportation” mechanism, which consists in sharing SPTs among pairs of routers. The EAR strategy controls the set of links to be put into sleep mode through the concept of a “move”. This approach gives the network operator the ability to control network performance and allows a smooth QoS degradation strategy to be implemented. A formulation of the EAR problem is presented, and it is shown that the problem can be reduced to the well-known problem of finding a maximum clique in an undirected weighted graph. A heuristic, called Max Compatibility, is presented; as shown in the performance evaluation study, it allows about 30% of the network links to be put to sleep with a negligible increase in network path lengths and link loads.
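To make the clique reduction concrete, the sketch below runs a simple greedy clique search over a weighted "compatibility" graph whose nodes stand for candidate moves. Both the example graph and the greedy strategy are assumptions for illustration; this is not the Max Compatibility heuristic itself.

```python
# A minimal greedy clique-search sketch on an undirected weighted graph, illustrating
# the reduction mentioned above. The graph and the greedy strategy are illustrative
# assumptions, not the paper's Max Compatibility heuristic.
def greedy_max_clique(weights):
    """weights: dict mapping frozenset({u, v}) -> edge weight."""
    nodes = {n for e in weights for n in e}

    def strength(n):
        # Rank nodes by total incident edge weight.
        return sum(w for e, w in weights.items() if n in e)

    clique = []
    for n in sorted(nodes, key=strength, reverse=True):
        if all(frozenset({n, c}) in weights for c in clique):
            clique.append(n)
    return clique

if __name__ == "__main__":
    # Hypothetical "moves" A..E; an edge means two moves are compatible
    # (can be applied together), weighted by the links they put to sleep.
    w = {frozenset(e): wt for e, wt in [
        (("A", "B"), 3), (("A", "C"), 2), (("B", "C"), 4),
        (("C", "D"), 1), (("D", "E"), 2), (("A", "E"), 1),
    ]}
    print(greedy_max_clique(w))   # e.g. ['B', 'C', 'A']
```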
{"title":"An OSPF enhancement for energy saving in IP networks","authors":"A. Cianfrani, V. Eramo, M. Listanti, Marco Polverini","doi":"10.1109/INFCOMW.2011.5928832","DOIUrl":"https://doi.org/10.1109/INFCOMW.2011.5928832","url":null,"abstract":"This paper deals with a strategy to save energy in an IP network during low traffic hours allowing a subset of IP router interfaces to be put in sleep mode by means of an Energy Aware Routing (EAR) strategy. The EAR is fully compatible with OSPF and is based on the “Shortest Path Tree (SPT) exportation” mechanism, consisting in sharing the SPTs among couple of routers. The EAR strategy is able to control the set of links to be put in sleep mode through the concept of “move”. This approach gives the network operator the possibility to control the network performance and allows a smoothed QoS degradation strategy to be implemented. A formulation of the EAR problem is presented and will be demonstrated that this problem can be traced back to the well-known problem of the maximum clique search in an undirected weighted graph. A heuristics, called Max Compatibility, is presented and, as shown in the performance evaluation study, it allows to save about 30% of network links with a negligible increase of network path lengths and link loads.","PeriodicalId":402219,"journal":{"name":"2011 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133183569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving the performance of intrusion detection using Dialog-based Payload Aggregation
Pub Date: 2011-04-10 | DOI: 10.1109/INFCOMW.2011.5928926
Tobias Limmer, F. Dressler
We propose Dialog-based Payload Aggregation (DPA), which extracts relevant payload data from TCP/IP packet streams, based on the sequence numbers in the TCP header, to improve intrusion detection performance. Typical network-based Intrusion Detection Systems (IDSs) like Snort, which match rules against payload data, show severe performance problems in high-speed networks. Our detailed analysis of live network traffic reveals that most signature matches occur either at the beginning of TCP connections or directly after direction changes in the data streams. DPA exploits protocol semantics intrinsic to bidirectional communication: most application-layer protocols rely on requests and associated responses, with a direction change in the data stream in between. DPA forwards the next N bytes of payload whenever a connection starts or the direction of the data transmission changes; all data transferred after this window is discarded. According to experimental results, our method reduces the amount of data to be analyzed by the IDS to around 3.7% for typical network traffic, while more than 89% of all potential events can still be detected. Assuming a linear relationship between data rate and IDS processing time, this results in a speedup of more than one order of magnitude in the best case. Our performance analysis combining DPA with Snort shows a 400% increase in packet processing throughput on commodity hardware.
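The forwarding rule itself is compact enough to sketch: per connection and direction, pass the first N payload bytes after a connection start or a direction change to the IDS and drop the rest. The toy filter below tracks a byte budget per (connection, direction) pair; real DPA operates on TCP sequence numbers and the window size N is a tunable parameter, so treat this only as an illustration of the aggregation window.

```python
# Minimal sketch of the DPA forwarding rule described above: per connection, pass
# the next N payload bytes to the IDS after a connection start or a direction
# change, and drop everything else. Real DPA works on TCP sequence numbers; this
# toy version just tracks a byte budget per (connection, direction).
N = 1024  # aggregation window in bytes (illustrative value)

class DpaFilter:
    def __init__(self, window=N):
        self.window = window
        self.state = {}   # conn_id -> [current_direction, bytes_left_in_window]

    def accept(self, conn_id, direction, payload):
        """Return the part of `payload` that should be forwarded to the IDS."""
        cur = self.state.get(conn_id)
        if cur is None or cur[0] != direction:
            cur = [direction, self.window]      # new connection or direction change
            self.state[conn_id] = cur
        take = min(cur[1], len(payload))
        cur[1] -= take
        return payload[:take]

if __name__ == "__main__":
    f = DpaFilter(window=8)
    print(f.accept("c1", "c2s", b"GET /index.html HTTP/1.1"))  # b'GET /ind'
    print(f.accept("c1", "c2s", b"Host: example.org"))         # b'' (window used up)
    print(f.accept("c1", "s2c", b"HTTP/1.1 200 OK"))           # b'HTTP/1.1' (direction change)
```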
{"title":"Improving the performance of intrusion detection using Dialog-based Payload Aggregation","authors":"Tobias Limmer, F. Dressler","doi":"10.1109/INFCOMW.2011.5928926","DOIUrl":"https://doi.org/10.1109/INFCOMW.2011.5928926","url":null,"abstract":"We propose Dialog-based Payload Aggregation (DPA) that extracts relevant payload data from TCP/IP packet streams based on sequence numbers in the TCP header for improved intrusion detection performance. Typical network-based Intrusion Detection Systems (IDSs) like Snort, which use rules for matching payload data, show severe performance problems in high-speed networks. Our detailed analysis based on live network traffic reveals that most of the signature matches either occur at the beginning of TCP connections or directly after direction changes in the data streams. Our DPA approach exploits protocol semantics intrinsic to bidirectional communication, i.e., most application layer protocols rely on requests and associated responses with a direction change in the data stream in between. DPA forwards the next N bytes of payload whenever a connection starts, or when the direction of the data transmission changes. All data transferred after this window is discarded. According to experimental results, our method reduces the amount of data to be analyzed at the IDS to around 3:7% for typical network traffic. At the same time, more than 89% of all potential events can be detected. Assuming a linear relationship between data rate and processing time of an IDS, this results in a speedup of more than one order of magnitude in the best case. Our performance analysis that combines DPA with Snort shows a 400% increase in packet processing throughput on commodity hardware.","PeriodicalId":402219,"journal":{"name":"2011 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133532046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identifying bad measurements in compressive sensing
Pub Date: 2011-04-10 | DOI: 10.1109/INFCOMW.2011.5928783
H. Kung, Tsung-Han Lin, D. Vlah
We consider the problem of identifying bad measurements in compressive sensing. Such bad measurements can be present due to malicious attacks or system malfunction. Since the system of linear equations in compressive sensing is underconstrained, errors introduced by these bad measurements can result in large changes in the decoded solutions. We describe methods for identifying bad measurements so that they can be removed before decoding. In a new separation-based method, we separate out the top nonzero variables by ranking, eliminate the remaining variables from the system of equations, and then solve the reduced, overconstrained problem to identify the bad measurements. Compared to prior methods based on direct or joint ℓ1-minimization, the separation-based method can work with a much smaller number of measurements. In analyzing the method we introduce the notion of inversions, which governs the separability of the large nonzero variables.
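As a rough illustration of the separation idea (not the authors' algorithm), the sketch below ranks variables by their correlation with the measurement vector, keeps only the strongest ones, solves the resulting overconstrained least-squares problem, and flags the measurement with the largest residual. The Gaussian sensing matrix, the ranking proxy and the single gross corruption are all simplifying assumptions.

```python
import numpy as np

# Step 1: rank variables and keep the strongest ones. Step 2: drop the rest,
# leaving an overconstrained least-squares problem. Step 3: flag the measurement
# with an unusually large residual. Illustrative setup only.
rng = np.random.default_rng(1)
n, m, k = 80, 50, 3                       # variables, measurements, sparsity
A = rng.standard_normal((m, n))
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = 20 * rng.choice([-1.0, 1.0], size=k)
y = A @ x
bad = 7
y[bad] += 80.0                            # one grossly corrupted measurement

keep = np.argsort(np.abs(A.T @ y))[-15:]  # Step 1: top-ranked variables (15 >= k)
A_red = A[:, keep]                        # Step 2: reduced, overconstrained system
x_red, *_ = np.linalg.lstsq(A_red, y, rcond=None)
resid = np.abs(y - A_red @ x_red)         # Step 3: residual-based flagging
print("suspected bad measurement index:", int(np.argmax(resid)))  # the corrupted index should stand out
```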
{"title":"Identifying bad measurements in compressive sensing","authors":"H. Kung, Tsung-Han Lin, D. Vlah","doi":"10.1109/INFCOMW.2011.5928783","DOIUrl":"https://doi.org/10.1109/INFCOMW.2011.5928783","url":null,"abstract":"We consider the problem of identifying bad measurements in compressive sensing. These bad measurements can be present due to malicious attacks and system malfunction. Since the system of linear equations in compressive sensing is underconstrained, errors introduced by these bad measurements can result in large changes in decoded solutions. We describe methods for identifying bad measurements so that they can be removed before decoding. In a new separation-based method we separate out top nonzero variables by ranking, eliminate the remaining variables from the system of equations, and then solve the reduced overconstrained problem to identify bad measurements. Comparing to prior methods based on direct or joint ℓ1-minimization, the separation-based method can work under a much smaller number of measurements. In analyzing the method we introduce the notion of inversions which governs the separability of large nonzero variables.","PeriodicalId":402219,"journal":{"name":"2011 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130326065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulation-based evaluation of the diffusion-based physical channel in molecular nanonetworks
Pub Date: 2011-04-10 | DOI: 10.1109/INFCOMW.2011.5928854
N. Garralda, I. Llatser, A. Cabellos-Aparicio, M. Pierobon
Nanonetworking is an emerging field of research in which nanotechnology and communication engineering are applied on common ground. Molecular Communication (MC) is a bio-inspired paradigm in which nanonetworks, i.e., the interconnection of devices at the nanoscale, are based on the exchange of molecules. Among other techniques, diffusion-based MC is expected to be suitable for covering short distances (nm-µm). In this work, we explore the main characteristics of diffusion-based MC through the use of N3Sim, a physical simulation framework for MC. N3Sim allows for the simulation of the physics underlying the diffusion of molecules in different scenarios. The N3Sim results show that the Linear Time-Invariant (LTI) property is a valid assumption for the free diffusion-based MC scenario. Moreover, diffusion-based noise is observed and evaluated with reference to previously proposed stochastic models. The optimal pulse shape for diffusion-based MC is derived from the simulations, and two different pulse-based coding techniques are compared through N3Sim in terms of available bandwidth and energy consumption for communication.
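The physics behind such a diffusion-based channel can be sketched with a plain Brownian-motion simulation: release a burst of molecules at the transmitter, let each take Gaussian random steps, and count how many fall within the receiver's radius at each time step, which traces out the received pulse. The parameters below (diffusion coefficient, distances, molecule count) are illustrative assumptions, and the sketch ignores the collisions and emitter/receiver models that N3Sim implements.

```python
import numpy as np

# Toy Brownian-motion sketch of a diffusion-based channel; N3Sim itself is far
# more detailed, so treat this as an illustration of the underlying physics only.
D = 1e-9            # diffusion coefficient (m^2/s), assumed value
DT = 1e-6           # time step (s)
STEPS = 2000
N_MOLECULES = 5000
RX_POS = np.array([2e-6, 0.0, 0.0])   # receiver 2 µm away from the transmitter
RX_RADIUS = 0.5e-6

rng = np.random.default_rng(42)
pos = np.zeros((N_MOLECULES, 3))       # all molecules released at the origin at t=0
sigma = np.sqrt(2 * D * DT)            # per-axis step std for Brownian motion

received = np.zeros(STEPS, dtype=int)
for t in range(STEPS):
    pos += rng.normal(0.0, sigma, size=pos.shape)
    inside = np.linalg.norm(pos - RX_POS, axis=1) < RX_RADIUS
    received[t] = inside.sum()         # molecules currently within the receiver sphere

peak = int(received.argmax())
print(f"pulse peaks at t = {peak * DT * 1e3:.3f} ms with {received[peak]} molecules in range")
```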
{"title":"Simulation-based evaluation of the diffusion-based physical channel in molecular nanonetworks","authors":"N. Garralda, I. Llatser, A. Cabellos-Aparicio, M. Pierobon","doi":"10.1109/INFCOMW.2011.5928854","DOIUrl":"https://doi.org/10.1109/INFCOMW.2011.5928854","url":null,"abstract":"Nanonetworking is an emerging field of research, where nanotechnology and communication engineering are applied on a common ground. Molecular Communication (MC) is a bio-inspired paradigm, where Nanonetworks, i.e., the interconnection of devices at the nanoscale, are based on the exchange of molecules. Amongst others, diffusion-based MC is expected to be suitable for covering short distances (nm-µm). In this work, we explore the main characteristics of diffusion-based MC through the use of N3Sim, a physical simulation framework for MC. N3Sim allows for the simulation of the physics underlying the diffusion of molecules for different scenarios. Through the N3Sim results, the Linear Time Invariant (LTI) property is proven to be a valid assumption for the free diffusion-based MC scenario. Moreover, diffusion-based noise is observed and evaluated with reference to already proposed stochastic models. The optimal pulse shape for diffusion-based MC is provided as a result of simulations. Two different pulse-based coding techniques are also compared through N3Sim in terms of available bandwidth and energy consumption for communication.","PeriodicalId":402219,"journal":{"name":"2011 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115636608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ISCoDe: A framework for interest similarity-based community detection in social networks
Pub Date: 2011-04-10 | DOI: 10.1109/INFCOMW.2011.5928942
E. Jaho, M. Karaliopoulos, I. Stavrakakis
This paper proposes a framework for clustering nodes in computerized social networks according to common interests. Communities in such networks are mainly formed by user selection, which may be based on various factors such as acquaintance, social status, or educational background. However, such selection may result in groups with a low degree of interest similarity. The proposed framework could improve the effectiveness of these social networks by constructing clusters of nodes with higher interest similarity, and thus maximize the benefit that users extract from their participation. The framework is based on methods for detecting communities over weighted graphs, where graph edge weights are defined by measures of similarity between nodes' interests in certain thematic areas. The capacity of these measures to enhance the sensitivity and resolution of community detection is evaluated with concrete benchmark scenarios over synthetic networks. We also use the framework to assess the level of common interest among sample users of a popular online social application. Our results confirm that clusters formed by user selection have low degrees of similarity; our framework could, hence, be valuable in forming communities with higher coherence of interests.
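A minimal version of this pipeline can be sketched in a few lines: represent each user by an interest vector over thematic areas, use a pairwise similarity as the edge weight, and hand the weighted graph to a community-detection routine. Cosine similarity and networkx's greedy modularity routine are used below as stand-ins for the paper's own similarity metrics and detection methods; the users and interest vectors are made up.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Sketch of the pipeline described above: interest vectors over thematic areas
# -> pairwise similarity as edge weights -> weighted community detection.
# Cosine similarity and greedy modularity are stand-ins, not the paper's metrics.
interests = {                     # hypothetical users x thematic areas
    "alice": [0.8, 0.1, 0.1],
    "bob":   [0.7, 0.2, 0.1],
    "carol": [0.1, 0.8, 0.1],
    "dave":  [0.2, 0.7, 0.1],
    "erin":  [0.1, 0.1, 0.8],
}

def cosine(u, v):
    u, v = np.asarray(u), np.asarray(v)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

G = nx.Graph()
users = list(interests)
for i, a in enumerate(users):
    for b in users[i + 1:]:
        G.add_edge(a, b, weight=cosine(interests[a], interests[b]))

communities = greedy_modularity_communities(G, weight="weight")
for c in communities:
    print(sorted(c))              # users with similar interest profiles end up together
```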
{"title":"ISCoDe: A framework for interest similarity-based community detection in social networks","authors":"E. Jaho, M. Karaliopoulos, I. Stavrakakis","doi":"10.1109/INFCOMW.2011.5928942","DOIUrl":"https://doi.org/10.1109/INFCOMW.2011.5928942","url":null,"abstract":"This paper proposes a framework for node clustering in computerized social networks according to common interests. Communities in such networks are mainly formed by user selection, which may be based on various factors such as acquaintance, social status, educational background. However, such selection may result in groups that have a low degree of similarity. The proposed framework could improve the effectiveness of these social networks by constructing clusters of nodes with higher interest similarity, and thus maximize the benefit that users extract from their participation. The framework is based on methods for detecting communities over weighted graphs, where graph edge weights are defined based on measures of similarity between nodes' interests in certain thematic areas. The capacity of these measures to enhance the sensitivity and resolution of community detection is evaluated with concrete benchmark scenarios over synthetic networks. We also use the framework to assess the level of common interests among sample users of a popular online social application. Our results confirm that clusters formed by user selection have low degrees of similarity; our framework could, hence, be valuable in forming communities with higher coherence of interests.","PeriodicalId":402219,"journal":{"name":"2011 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123397027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hard-deadline-based frame filtering mechanism supporting the delivery of real-time video streams
Pub Date: 2011-04-10 | DOI: 10.1109/INFCOMW.2011.5928933
J. Liu
This paper describes a cross-layer filtering mechanism that helps real-time video frames meet their stringent decoding deadlines in the presence of network congestion. The basic idea is to remove dysfunctional video frames, i.e., frames that have already missed their decoding deadlines, from transmission as early as possible, since they no longer contribute to the functioning of a real-time media streaming application. The filtering mechanism consists of a pair of components that operate at the encoder and the decoder, respectively. The decoder-side component identifies the dysfunctional frames and sends notifications to the encoder; the encoder-side component removes the identified dysfunctional frames from transmission. By removing dysfunctional frames from transmission, the video frames queued behind them become eligible for transmission earlier and are more likely to meet their own decoding deadlines. Removing dysfunctional frames also helps maintain a stable and low queueing delay. The filtering mechanism relies on a user-space transport stack that enables application-controlled transmission of data segments. The effectiveness of the filtering mechanism has been demonstrated through experiments in emulated networks.
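The encoder-side behaviour can be sketched as a filtered transmit queue: frames that the decoder has reported as dysfunctional, or whose decoding deadline has already passed locally, are skipped rather than sent, so later frames get the link earlier. The feedback and timing model below are simplified assumptions, not the paper's user-space transport stack.

```python
import time
from collections import deque

# Minimal sketch of the encoder-side filtering logic described above: frames the
# decoder has reported as dysfunctional (deadline already missed) are dropped from
# the transmit queue instead of being sent. The feedback channel and timing model
# are simplified assumptions.
class FilteringSender:
    def __init__(self):
        self.queue = deque()            # (frame_id, decode_deadline, payload)
        self.dysfunctional = set()      # frame ids reported by the decoder

    def enqueue(self, frame_id, decode_deadline, payload):
        self.queue.append((frame_id, decode_deadline, payload))

    def on_decoder_feedback(self, frame_id):
        self.dysfunctional.add(frame_id)

    def next_frame(self, now=None):
        """Pop the next frame that is still worth sending; drop the rest."""
        now = time.monotonic() if now is None else now
        while self.queue:
            frame_id, deadline, payload = self.queue.popleft()
            if frame_id in self.dysfunctional or deadline <= now:
                continue                # dysfunctional: skip, freeing capacity for later frames
            return frame_id, payload
        return None

if __name__ == "__main__":
    s = FilteringSender()
    for i in range(4):
        s.enqueue(i, decode_deadline=100 + 33 * i, payload=b"frame-%d" % i)
    s.on_decoder_feedback(0)            # decoder says frame 0 already missed its deadline
    print(s.next_frame(now=105))        # frame 1 is sent next (frame 0 dropped, deadline 133 > 105)
```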
{"title":"Hard-deadline-based frame filtering mechanism supporting the delivery of real-time video streams","authors":"J. Liu","doi":"10.1109/INFCOMW.2011.5928933","DOIUrl":"https://doi.org/10.1109/INFCOMW.2011.5928933","url":null,"abstract":"This paper describes a cross-layer filtering mechanism which facilitates real-time video frames to meet their stringent decoding deadlines in the existence of network congestion. The basic idea is to remove the dysfunctional video frames, which have missed their decoding deadlines, from transmission as early as possible, since they no longer serve for the functioning of a real-time media streaming application. The filtering mechanism consists of a pair of components which operate at the encoder and the decoder, respectively. The decoder-side component identifies the dysfunctional frames and sends the notifications to the encoder. The encoder-side component removes the identified dysfunctional frames from transmission. By removing dysfunctional frames from transmission, the video frames that are behind the dysfunctional frames are eligible for transmission at an earlier time and are made likely to meet their decoding deadlines. Meanwhile, removing dysfunctional frames from transmission also serves to maintain a stable and low queueing delay. The filtering mechanism relies on a user-space transport stack which enables the application-controlled transmission of data segments. The effectiveness of the filtering mechanism has been demonstrated through experiments in emulated networks.","PeriodicalId":402219,"journal":{"name":"2011 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124906857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}