Pub Date: 2016-06-27 | DOI: 10.1109/ISCC.2016.7543768
Flaviano Di Rienzo, M. Girolami, S. Chessa, F. Paparella, A. Caruso
Underwater communication through acoustic modems raises several networking challenges for Underwater Acoustic Sensor Networks (UASNs). In particular, opportunistic routing is a novel but promising technique that can remarkably increase the reliability of a UASN, but its use in this context requires studies of the nature of mobility in UASNs. Our goal is to study a real-world mobility dataset obtained from the Argo project. In particular, we observe the mobility of 51 free-drifting floats deployed in the Mediterranean Sea for approximately one year, and we analyze some important properties of the underwater network we built. Specifically, we analyze the contact time and inter-contact time, as well as the density and network degree, while varying the connectivity degree of the whole dataset. We then consider three known routing algorithms, namely Epidemic, PROPHET, and Direct Delivery, with the goal of measuring their performance under realistic conditions for UASNs. We finally discuss the opportunities arising from the adoption of opportunistic routing in UASNs, showing that, even in a very sparse and strongly disconnected network, it is still possible to build a limited but working networking framework.
Title: Signals from the depths: Properties of percolation strategies with the Argo dataset
Published in: 2016 IEEE Symposium on Computers and Communication (ISCC)
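As an illustration of the contact-time and inter-contact-time metrics analyzed in this paper, the following sketch computes both from synchronized position traces. It is a hypothetical toy, not the authors' tooling: the trace format, the 2-D geometry, and the fixed communication range are all assumptions.

```python
from math import dist

def contact_intervals(trace_a, trace_b, comm_range):
    """Given two synchronized position traces [(t, x, y), ...], return the
    (start, end) intervals during which the two nodes are within range."""
    intervals, start = [], None
    for (t, xa, ya), (_, xb, yb) in zip(trace_a, trace_b):
        in_contact = dist((xa, ya), (xb, yb)) <= comm_range
        if in_contact and start is None:
            start = t                          # contact begins
        elif not in_contact and start is not None:
            intervals.append((start, t))       # contact ends
            start = None
    if start is not None:                      # still in contact at trace end
        intervals.append((start, trace_a[-1][0]))
    return intervals

def inter_contact_times(intervals):
    """Gaps between the end of one contact and the start of the next."""
    return [s2 - e1 for (_, e1), (s2, _) in zip(intervals, intervals[1:])]
```

Running this over every float pair would yield the empirical contact-time and inter-contact-time distributions the paper studies.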
Pub Date: 2016-06-27 | DOI: 10.1109/ISCC.2016.7543829
Rashid Morady, D. Dal
Multiprocessor task scheduling is one of the hardest combinatorial optimization problems in parallel and distributed systems. It is known to be NP-hard and, therefore, scanning the whole search space with an exact algorithm to find the optimal solution is not practical. Instead, metaheuristics are mostly employed to find a near-optimal solution in a reasonable amount of time. In this paper, a multi-population-based parallel genetic algorithm is presented for the optimization of multiprocessor task scheduling in the presence of communication costs. To the best of our knowledge, this parallel genetic algorithm approach is applied to the problem at hand for the first time using a benchmark set that includes well-known task graphs from different sources. Our experiments, conducted on several task graphs of different sizes from the benchmark set, showed the superiority of the approach over a conventional genetic algorithm and over related works in the literature in terms of two different performance metrics. Our parallel implementation not only decreased the execution time but also considerably increased the quality of the scheduling solutions.
Title: A multi-population based parallel genetic algorithm for multiprocessor task scheduling with Communication Costs
Published in: 2016 IEEE Symposium on Computers and Communication (ISCC)
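A multi-population (island-model) genetic algorithm of the kind described can be sketched as follows. This is a minimal illustration, not the authors' implementation: the bit-string encoding, truncation selection, ring migration, and the toy fitness are all assumptions; a real task-scheduling GA would encode processor assignments and evaluate makespan on a task graph with communication costs.

```python
import random

def island_ga(fitness, n_islands=4, pop_size=20, genome_len=8,
              generations=50, migrate_every=10, seed=0):
    """Toy island-model GA minimizing `fitness` over fixed-length bit
    strings. Islands evolve independently and periodically pass their
    best individual to the next island (ring migration)."""
    rng = random.Random(seed)
    pops = [[[rng.randint(0, 1) for _ in range(genome_len)]
             for _ in range(pop_size)] for _ in range(n_islands)]

    def evolve(pop):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, genome_len)
            child = p1[:cut] + p2[cut:]        # one-point crossover
            if rng.random() < 0.1:             # bit-flip mutation
                i = rng.randrange(genome_len)
                child[i] ^= 1
            children.append(child)
        return elite + children

    for gen in range(generations):
        pops = [evolve(p) for p in pops]
        if (gen + 1) % migrate_every == 0:     # ring migration of bests
            bests = [min(p, key=fitness) for p in pops]
            for i, p in enumerate(pops):
                p[-1] = bests[(i - 1) % n_islands]
    return min((min(p, key=fitness) for p in pops), key=fitness)
```

In a parallel implementation, each island would run in its own process or thread, with migration as the only synchronization point.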
Pub Date: 2016-06-27 | DOI: 10.1109/ISCC.2016.7543752
F. Murgia, I. Tagliente, I. Zoppis, G. Mauri, F. Sicurello, F. Bella, Vanessa Mercuri, E. Santoro, G. Castelnuovo, S. Bella
Since 2001, at the Cystic Fibrosis Center of the Pediatric Hospital Bambino Gesù in Rome, we have used telemedicine to monitor our patients. While our first published works reporting this experience showed a statistically significant reduction in hospital admissions and a tendency over time towards better stability of respiratory function for telehomecare (THC) patients, here we focus on the trend of the Forced Expiratory Volume in the first second (FEV1). In particular, we investigate the evolution of the clinical trend of the FEV1 index by monitoring the activities of home patients from 2011 to 2014. THC is applied in addition to the standard therapeutic protocol, with 16 Cystic Fibrosis (CF) patients followed by specialized doctors. Our results show that THC patients improve their FEV1 values with a trend that can be considered significantly better than the one reported for the control group.
Title: Trend of FEV1 in Cystic Fibrosis patients: A telehomecare experience
Published in: 2016 IEEE Symposium on Computers and Communication (ISCC)
Pub Date: 2016-06-27 | DOI: 10.1109/ISCC.2016.7543834
M. L. D. Vedova, D. Tessera, M. Calzarossa
Resource provisioning and task scheduling in Cloud environments are quite challenging because of fluctuating workload patterns and the unpredictable behavior and unstable performance of the infrastructure. It is therefore important to properly master the uncertainties associated with Cloud workloads and infrastructure. In this paper, we propose a probabilistic approach to resource provisioning and task scheduling that allows users to estimate in advance, i.e., offline, the resources to be provisioned, thus reducing the risk and the impact of overprovisioning or underprovisioning. In particular, we formulate an optimization problem whose objective is to identify scheduling plans that minimize the overall monetary cost of leasing Cloud resources subject to workload constraints. This cost-aware model ensures that the execution time of an application does not exceed a specified deadline with a given probability, even in the presence of uncertainties. To evaluate the behavior and sensitivity to uncertainties of the proposed approach, we simulate a simple batch workload consisting of MapReduce jobs. The experimental results show that, unlike provisioning and scheduling approaches that do not take uncertainties into account in their decision process, our probabilistic approach nicely adapts to workload and Cloud uncertainties.
Title: Probabilistic provisioning and scheduling in uncertain Cloud environments
Published in: 2016 IEEE Symposium on Computers and Communication (ISCC)
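The chance-constrained flavor of this optimization can be illustrated with a Monte Carlo sketch: pick the cheapest VM count whose simulated probability of meeting the deadline reaches a target. Everything here is an assumption for illustration (greedy list scheduling, the runtime sampler, leasing every VM for the full deadline as the cost model); the paper's actual formulation is richer.

```python
import random

def provision(n_tasks, deadline, target_prob, max_vms,
              cost_per_vm_hour, runtime_sampler, trials=2000, seed=0):
    """Return the cheapest (n_vms, cost) whose simulated probability of
    finishing `n_tasks` within `deadline` meets `target_prob`, or None.
    Uncertain task runtimes are drawn from `runtime_sampler`."""
    rng = random.Random(seed)
    for n_vms in range(1, max_vms + 1):
        ok = 0
        for _ in range(trials):
            loads = [0.0] * n_vms
            for _ in range(n_tasks):
                i = loads.index(min(loads))    # least-loaded VM first
                loads[i] += runtime_sampler(rng)
            if max(loads) <= deadline:         # makespan within deadline?
                ok += 1
        if ok / trials >= target_prob:
            # simplification: every VM is leased for the full deadline
            return n_vms, n_vms * deadline * cost_per_vm_hour
    return None  # deadline unreachable even with max_vms
```

The loop over `n_vms` is the offline estimation step: the user learns before launch how many VMs keep the deadline-miss risk below the chosen threshold.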
Pub Date: 2016-06-27 | DOI: 10.1109/ISCC.2016.7543709
S. Cuomo, P. D. Michele, A. Galletti, L. Marcellino
We focus on the Overcomplete Local Principal Component Analysis (OLPCA) method, which is widely adopted as a denoising filter. We propose a programming approach based on Graphics Processing Units (GPUs) in order to massively parallelize some computationally heavy tasks of the method. In our approach, we design and implement a parallel version of OLPCA using a suitable mapping of the tasks onto a GPU architecture, with the aim of investigating the performance and denoising features of the algorithm. The experimental results show improvements in terms of GFlops and memory throughput.
Title: A GPU parallel implementation of the Local Principal Component Analysis overcomplete method for DW image denoising
Published in: 2016 IEEE Symposium on Computers and Communication (ISCC)
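The core idea of PCA-based patch denoising can be sketched in a few lines: project patches onto the leading principal components and drop the noisy tail. This is a generic sketch, not the OLPCA method or its GPU mapping, and the `keep_ratio` variance threshold is an assumed heuristic.

```python
import numpy as np

def pca_denoise_patches(patches, keep_ratio=0.9):
    """Denoise a stack of flattened patches (n_patches, patch_dim) by
    keeping only the leading principal components that explain
    `keep_ratio` of the variance; the discarded tail is mostly noise."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    cov = centered.T @ centered / len(patches)   # patch covariance
    vals, vecs = np.linalg.eigh(cov)             # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]       # reorder descending
    explained = np.cumsum(vals) / vals.sum()
    k = int(np.searchsorted(explained, keep_ratio)) + 1
    basis = vecs[:, :k]                          # leading components
    return centered @ basis @ basis.T + mean     # project and restore mean
```

On a GPU, the covariance products and the per-patch projections are the naturally parallel steps; the eigendecomposition of the small patch-sized matrix is comparatively cheap.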
Pub Date: 2016-06-27 | DOI: 10.1109/ISCC.2016.7543891
Jinghong Wu, Hong Ni, Xuewen Zeng, Liping Ding, Xiaozhou Ye
OpenFlow, which facilitates the decoupling of the forwarding and control planes, is developing rapidly and has already been widely studied in various fields. OpenFlow switches perform certain actions under the guidance of flow tables, which are configured by the controller. Currently, most of the work on OpenFlow switches is concerned with switching performance, such as forwarding rate, forwarding latency, and flow table efficiency. In contrast, little work has been done on storage for OpenFlow switches. However, directly imposing storage functionality on OpenFlow switches with large-capacity storage devices leads to a series of problems. The most prominent one is the degradation of the forwarding rate, which is one of the most important figures of merit for an OpenFlow switch. This paper analyses the typical problems in this context and proposes a novel storage approach based on Protocol Oblivious Forwarding (POF), an enhancement to the OpenFlow-based SDN forwarding architecture. The preliminary experimental results on a Linux-based POF soft switch validate the effectiveness and efficiency of our approach.
Title: A storage approach for OpenFlow switch based on Protocol Oblivious Forwarding
Published in: 2016 IEEE Symposium on Computers and Communication (ISCC)
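The match-action lookup that flow tables implement can be illustrated with a toy, software-only table. This is a didactic sketch, not POF or a real switch data path: the header fields, the wildcard convention, and the table-miss behavior are simplified assumptions.

```python
def lookup(flow_table, packet):
    """Match a packet (dict of header fields) against a priority-ordered
    flow table. Each entry is (priority, match_dict, actions); a match
    field of None acts as a wildcard, OpenFlow-style."""
    for priority, match, actions in sorted(flow_table, key=lambda e: -e[0]):
        if all(v is None or packet.get(k) == v for k, v in match.items()):
            return actions
    return ["CONTROLLER"]  # table miss: punt to the controller
```

A real switch compiles these semantics into TCAM or hash lookups; the point here is only the priority-plus-wildcard matching model that the controller programs.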
Pub Date: 2016-06-27 | DOI: 10.1109/ISCC.2016.7543792
J. Ceron, C. Margi, L. Granville
Mechanisms to detect and analyze malicious software are essential to improving security systems. Current security mechanisms have limited success in detecting sophisticated malicious software. Beyond evading analysis systems, many malware samples require specific conditions to activate their actions on the target system. The flexibility of Software-Defined Networking (SDN) provides an opportunity to develop a malware analysis architecture that integrates different systems and network profile configurations. In this paper, we design an architecture specialized for malware analysis that uses SDN to dynamically reconfigure the network environment based on malware actions. As a result, we demonstrate that our solution can trigger more malware events than traditional solutions that do not consider the environment surrounding the sandbox an important component of malware analysis.
Title: MARS: An SDN-based malware analysis solution
Published in: 2016 IEEE Symposium on Computers and Communication (ISCC)
Pub Date: 2016-06-27 | DOI: 10.1109/ISCC.2016.7543765
Naila Bouchemal, Sondés Khemiri-Kallel, S. Tohmé
Long Term Evolution (LTE) networks are expected to provide enhanced quality-of-service (QoS) guarantees and high capacity. In this paper, we propose a novel QoS-guaranteed cross-layer scheduling algorithm for LTE systems. The LTE scheduler is responsible for efficiently allocating radio resources among mobile users who have different QoS demands. We focus on resource block allocation and the Medium Access Control (MAC) scheduling algorithm. Our study highlights the impact of the Radio Resource Control (RRC) configuration on overall performance and on Transport Block (TB) filling during the allocation process. Performance results show that the proposed algorithm is effective in enhancing service fairness, throughput, and resource optimization. Our proposal aims to strike a balance between fairness, resource availability, the requested rate, and service QoS requirements.
Title: A cross-layer QoS solution for resource optimization in LTE networks
Published in: 2016 IEEE Symposium on Computers and Communication (ISCC)
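A classic LTE MAC policy that trades throughput against fairness is proportional-fair scheduling; one round of it is sketched below. This is a generic illustration, not the algorithm proposed in the paper: the rate inputs, the averaging constant `beta`, and the per-round update rule are assumptions.

```python
def pf_schedule(inst_rates, avg_rates, beta=0.1):
    """One round of a proportional-fair resource block allocator.
    inst_rates[u][rb]: achievable rate of user u on resource block rb;
    avg_rates[u]: exponentially averaged past throughput of user u.
    Returns ({rb: user}, updated avg_rates)."""
    n_users, n_rbs = len(inst_rates), len(inst_rates[0])
    alloc, served = {}, [0.0] * n_users
    for rb in range(n_rbs):
        # PF metric: instantaneous rate over long-term average, so
        # starved users win RBs even with mediocre channel conditions
        u = max(range(n_users),
                key=lambda u: inst_rates[u][rb] / max(avg_rates[u], 1e-9))
        alloc[rb] = u
        served[u] += inst_rates[u][rb]
    new_avg = [(1 - beta) * avg_rates[u] + beta * served[u]
               for u in range(n_users)]
    return alloc, new_avg
```

With equal histories each user wins the RBs where its channel is best; as a user's average grows, its metric shrinks, which is the fairness mechanism.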
Pub Date: 2016-06-27 | DOI: 10.1109/ISCC.2016.7543719
N. Dusi, I. Ferretti, M. Furini
Within social media we find many stories that tell us about the world around us. Unfortunately, we tend to forget what happened in the past, and the younger generations are losing a cultural heritage passed down for generations. In an attempt to preserve memories and capture the attention of new generations, in this paper we propose PlayTheCityRE, a location-based storytelling system that merges private film memories shot from 1940 to 1989 (e.g., 8mm, Super 8) with modern communication technologies to tell the story of our past while walking through city streets. The system comes with a mobile application that allows people to explore an unusual city through the eyes of amateur (now historic) film sequences and to select different routes that bring them to the same city places where the sequences were filmed. By merging film memories with modern technologies, our system engages different audiences in specific ways and on multiple levels, allowing them to walk through history. Our storytelling system may therefore help foster historical consciousness within our society.
Title: PlayTheCityRE: A visual storytelling system that transforms recorded film memories into visual history
Published in: 2016 IEEE Symposium on Computers and Communication (ISCC)
Pub Date: 2016-06-27 | DOI: 10.1109/ISCC.2016.7543811
S. Wang, Szu-Yu Liu, C. Chou
In this paper, we designed, implemented, and evaluated the performance of software OpenFlow flow counters in a bare-metal commodity switch. Normally, flow counters are implemented in hardware for line-rate operation, and their number needs to be very large to support a large flow-based SDN network. Although these hardware counters operate very fast, they greatly increase the cost of an OpenFlow switch. In addition, due to the limited chip size of the switching ASIC used in an OpenFlow switch, the number of hardware counters cannot scale to a large number. To overcome these drawbacks, we designed and implemented software flow counters in a 48-port × 10 Gbps (port bandwidth) bare-metal commodity switch and evaluated their performance and limitations. This paper also reports important findings obtained from this practical work.
Title: Design, implementation and performance evaluation of software OpenFlow flow counters in a bare metal commodity switch
Published in: 2016 IEEE Symposium on Computers and Communication (ISCC)
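A software flow counter of the kind evaluated here boils down to per-flow packet and byte tallies kept in main memory and updated by the switch CPU. The sketch below is a hypothetical illustration (the flow key, locking scheme, and API are assumptions), not the authors' switch implementation; it shows the scalability win and hints at the CPU cost on the data path.

```python
from collections import defaultdict
import threading

class SoftwareFlowCounters:
    """Per-flow packet/byte counters in DRAM instead of ASIC registers:
    cheap and scalable to millions of flows, at the price of CPU work
    for every counted packet (the trade-off studied in the paper)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._counters = defaultdict(lambda: [0, 0])  # flow -> [pkts, bytes]

    def count(self, flow_key, pkt_len):
        """Update the counters for one forwarded packet."""
        with self._lock:
            c = self._counters[flow_key]
            c[0] += 1
            c[1] += pkt_len

    def read(self, flow_key):
        """Return (packets, bytes) for a flow; (0, 0) if never seen."""
        with self._lock:
            pkts, nbytes = self._counters.get(flow_key, [0, 0])
            return pkts, nbytes
```

The per-packet lock acquisition is exactly the kind of data-path overhead that degrades forwarding rate; batched or per-core sharded counters are common mitigations.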