RPO: Receiver-driven Transport Protocol Using Opportunistic Transmission in Data Center
Pub Date: 2021-11-01 | DOI: 10.1109/ICNP52444.2021.9651980
Jinbin Hu, Jiawei Huang, Zhaoyi Li, Yijun Li, Wenchao Jiang, Kai Chen, Jianxin Wang, Tian He
Modern datacenter applications bring fundamental challenges to transport protocols as they simultaneously require low latency and high throughput. Recent receiver-driven transport protocols transmit only one data packet upon receiving each grant or credit packet from the receiver, thereby achieving ultra-low queueing delay and zero packet loss. However, round-trip time variation and highly dynamic background traffic significantly deteriorate the performance of receiver-driven transport protocols, resulting in under-utilized bandwidth. This paper designs a simple yet effective solution called RPO that retains the advantages of receiver-driven transmission while efficiently utilizing the available bandwidth. Specifically, RPO judiciously uses low-priority opportunistic packets to ensure high network utilization without increasing the queueing delay of high-priority normal packets. In addition, since RPO uses only the Explicit Congestion Notification (ECN) marking function and priority queues, it is readily deployable on existing switches. We implement RPO in Linux hosts with DPDK. Our small-scale testbed experiments and large-scale simulations show that RPO improves network utilization by up to 35% under high workloads over state-of-the-art receiver-driven transmission schemes, without introducing additional queueing delay.
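To make the opportunistic-transmission idea concrete, the following is a minimal Python sketch of a receiver-driven sender that emits one high-priority packet per credit and fills otherwise idle bandwidth with low-priority opportunistic packets; the class, priority codepoints, and budget parameter are illustrative assumptions, not the authors' DPDK implementation.

```python
# Illustrative sketch of receiver-driven sending with low-priority
# opportunistic packets (not the RPO authors' DPDK implementation).
import collections

HIGH_PRIORITY = 0   # assumed codepoint for credit-driven (normal) packets
LOW_PRIORITY = 1    # assumed codepoint for opportunistic packets

class RpoLikeSender:
    def __init__(self, flow_data, opportunistic_budget=2):
        self.pending = collections.deque(flow_data)  # unsent payloads
        self.budget = opportunistic_budget           # extra packets per idle tick

    def on_credit(self, send):
        """Receiver granted one credit: send exactly one normal packet."""
        if self.pending:
            send(self.pending.popleft(), HIGH_PRIORITY)

    def on_idle(self, send):
        """No credit arrived in time: opportunistically send a few
        low-priority packets so spare bandwidth is not wasted; switches
        serve them only when the high-priority queue is empty.
        (Retransmission of lost opportunistic packets is omitted.)"""
        for _ in range(min(self.budget, len(self.pending))):
            send(self.pending.popleft(), LOW_PRIORITY)

# usage: wire `send` to a UDP socket or a DPDK TX queue in a real host
sender = RpoLikeSender([b"seg%d" % i for i in range(8)])
sender.on_credit(lambda pkt, prio: print("tx", pkt, "prio", prio))
sender.on_idle(lambda pkt, prio: print("tx", pkt, "prio", prio))
```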
{"title":"RPO: Receiver-driven Transport Protocol Using Opportunistic Transmission in Data Center","authors":"Jinbin Hu, Jiawei Huang, Zhaoyi Li, Yijun Li, Wenchao Jiang, Kai Chen, Jianxin Wang, Tian He","doi":"10.1109/ICNP52444.2021.9651980","DOIUrl":"https://doi.org/10.1109/ICNP52444.2021.9651980","url":null,"abstract":"Modern datacenter applications bring fundamental challenges to transport protocols as they simultaneously require low latency and high throughput. Recent receiver-driven trans-port protocols transmit only one data packet once receiving each grant or credit packet from the receiver to achieve ultra-low queueing delay and zero packet loss. However, the round-trip time variation and the highly dynamic background traffic significantly deteriorate the performance of receiver-driven transport protocols, resulting in under-utilized bandwidth. This paper designs a simple yet effective solution called RPO that retains the advantages of receiver-driven transmission while efficiently utilizing the available bandwidth. Specifically, RPO rationally uses low-priority opportunistic packets to ensure high network utilization without increasing the queueing delay of high-priority normal packets. In addition, since RPO only uses Explicit Congestion Notification (ECN) marking function and priority queues, RPO is ready to deploy on switches. We implement RPO in Linux hosts with DPDK. Our small-scale testbed experiments and large-scale simulations show that RPO significantly improves the network utilization by up to 35% under high workload over the state-of-the-art receiver-driven transmission schemes, without introducing additional queueing delay.","PeriodicalId":343813,"journal":{"name":"2021 IEEE 29th International Conference on Network Protocols (ICNP)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125670681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R-AQM: Reverse ACK Active Queue Management in Multi-tenant Data Centers
Pub Date: 2021-11-01 | DOI: 10.1109/ICNP52444.2021.9651922
Xinle Du, Tong Li, Lei Xu, Kai Zheng, Meng Shen, Bo Wu, Ke Xu
TCP incast has become a practical problem for high-bandwidth, low-latency transmissions, resulting in throughput degradation of up to 90% and delays of hundreds of milliseconds, severely impacting application performance. However, in virtualized multi-tenant data centers, host-based improvements to the TCP stack are hard to deploy from the operators' perspective: operators only provide infrastructure in the form of virtual machines, and only tenants can directly modify the end-host TCP stack. In this paper, we present R-AQM, a switch-powered reverse-ACK active queue management mechanism that enhances ACK-clocking effects by assisting legacy TCP. Specifically, R-AQM proactively intercepts ACKs and paces the ACK-clocked in-flight data packets, preventing TCP from suffering incast collapse. We implement and evaluate R-AQM in NS-3 simulations and on a NetFPGA-based hardware switch. Both simulation and testbed results show that R-AQM greatly improves TCP performance under heavy incast workloads by significantly lowering the packet loss rate, reducing retransmission timeouts, and supporting 16 times more senders (i.e., 60 → 1000). Meanwhile, forward queuing delays are reduced by a factor of 4.6.
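The reverse-path pacing idea can be illustrated with a small Python sketch of an ACK pacer that intercepts ACKs and releases them at a fixed interval; the interval and data structures are assumptions, and the actual mechanism runs in a NetFPGA switch rather than in software.

```python
# Minimal sketch of reverse-ACK pacing in the spirit of R-AQM
# (illustrative only; the real mechanism runs in switch hardware).
import heapq
import itertools

class AckPacer:
    def __init__(self, pacing_interval_us=10.0):
        self.interval = pacing_interval_us   # assumed gap between released ACKs
        self.next_release = 0.0
        self._seq = itertools.count()        # tie-breaker for the heap
        self.queue = []                      # (release_time, seq, ack)

    def intercept(self, ack, now_us):
        """Hold an ACK arriving on the reverse path and schedule its release."""
        self.next_release = max(self.next_release, now_us) + self.interval
        heapq.heappush(self.queue, (self.next_release, next(self._seq), ack))

    def release_due(self, now_us):
        """Forward all ACKs whose release time has passed; the paced ACK
        clock then paces the sender's in-flight data packets."""
        released = []
        while self.queue and self.queue[0][0] <= now_us:
            released.append(heapq.heappop(self.queue)[2])
        return released

pacer = AckPacer()
for t in (0, 1, 2):
    pacer.intercept({"ack_no": 1000 + t}, now_us=t)
print(pacer.release_due(now_us=25))   # first two ACKs are due by t=25us
```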
{"title":"R-AQM: Reverse ACK Active Queue Management in Multi-tenant Data Centers","authors":"Xinle Du, Tong Li, Lei Xu, Kai Zheng, Meng Shen, Bo Wu, Ke Xu","doi":"10.1109/ICNP52444.2021.9651922","DOIUrl":"https://doi.org/10.1109/ICNP52444.2021.9651922","url":null,"abstract":"TCP incast has become a practical problem for high-bandwidth, low-latency transmissions, resulting in throughput degradation of up to 90% and delays of hundreds of milliseconds, severely impacting application performance. However, in virtualized multi-tenant data centers, host-based advancements in the TCP stack are hard to deploy from the operators perspective. Operators only provide infrastructure in the form of virtual machines, in which only tenants can directly modify the end-host TCP stack. In this paper, we present R-AQM, a switch-powered reverse ACK active queue management (R-AQM) mechanism for enhancing ACK-clocking effects through assisting legacy TCP. Specifically, R-AQM proactively intercepts ACKs and paces the ACK-clocked in-flight data packets, preventing TCP from suffering incast collapse. We implement and evaluate R-AQM in NS-3 simulation and NetFPGA-based hardware switch. Both simulation and testbed results show that R-AQM greatly improves TCP performance under heavy incast workloads by significantly lowering packet loss rate, reducing retransmission timeouts, and supporting 16 times (i.e., 60 → 1000) more senders. Meanwhile, the forward queuing delays are also reduced by 4.6 times.","PeriodicalId":343813,"journal":{"name":"2021 IEEE 29th International Conference on Network Protocols (ICNP)","volume":"149 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127267296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Advertising DNS Protocol Use to Mitigate DDoS Attacks
Pub Date: 2021-11-01 | DOI: 10.1109/ICNP52444.2021.9651929
Jacob Davis, C. Deccio
The Domain Name System (DNS) has been frequently abused for distributed denial-of-service (DDoS) attacks and cache poisoning because it relies on the User Datagram Protocol (UDP). Since UDP is connectionless, it is trivial for an attacker to spoof the source of a DNS query or response. While other secure transport mechanisms, such as the Transmission Control Protocol (TCP) and DNS Cookies, provide identity management, there is currently no method for clients to state that they use only a given protocol. This paper presents a new method to allow protocol enforcement: DNS Protocol Advertisement Records (DPAR). Advertisement records allow Internet Protocol (IP) address subnets to post a public record in the reverse DNS zone stating which DNS mechanisms are used by their clients. DNS servers may then look up this record and require a client to use the stated mechanism, in turn preventing an attacker from sending spoofed messages over UDP. In this paper, we define the specification for DNS Protocol Advertisement Records, the considerations that were made, and comparisons to alternative approaches. We additionally estimate the effectiveness of advertisements in preventing DDoS attacks and the expected burden on DNS servers.
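As a rough illustration of how a server might consult such an advertisement, the sketch below looks up a hypothetical "_dpar" TXT record under the client's reverse DNS name using dnspython and rejects plain-UDP queries only when the subnet advertises TCP or DNS Cookies; the record label and format are assumptions, since DPAR is a proposal rather than a deployed record type.

```python
# Hypothetical server-side check of a DNS Protocol Advertisement Record.
# The "_dpar" TXT label and its comma-separated value format are assumed
# here for illustration only; DPAR's real specification is in the paper.
import dns.name
import dns.resolver
import dns.reversename

def advertised_mechanisms(client_ip):
    """Return the set of DNS mechanisms the client's subnet advertises,
    e.g. {"tcp", "cookies"}, or None if no advertisement is published."""
    rev = dns.reversename.from_address(client_ip)     # e.g. 4.3.2.1.in-addr.arpa.
    name = dns.name.from_text("_dpar", origin=rev)    # assumed record label
    try:
        answer = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    mechanisms = set()
    for rr in answer:
        for txt in rr.strings:
            mechanisms.update(txt.decode().lower().split(","))
    return mechanisms

def should_reject_udp_query(client_ip):
    """Reject a plain-UDP query only when the subnet explicitly advertises
    that its clients always use another mechanism (TCP, cookies, ...)."""
    adv = advertised_mechanisms(client_ip)
    return adv is not None and "udp" not in adv
```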
{"title":"Advertising DNS Protocol Use to Mitigate DDoS Attacks","authors":"Jacob Davis, C. Deccio","doi":"10.1109/ICNP52444.2021.9651929","DOIUrl":"https://doi.org/10.1109/ICNP52444.2021.9651929","url":null,"abstract":"The Domain Name System (DNS) has been frequently abused for distributed denial-of-service (DDoS) attacks and cache poisoning because it relies on the User Datagram Protocol (UDP). Since UDP is connection-less, it is trivial for an attacker to spoof the source of a DNS query or response. While other secure transport mechanisms provide identity management, such as the Transmission Control Protocol (TCP) and DNS Cookies, there is currently no method for a client to state that they only use a given protocol. This paper presents a new method to allow protocol enforcement: DNS Protocol Advertisement Records (DPAR). Advertisement records allow Internet Protocol (IP) address subnets to post a public record in the reverse DNS zone stating which DNS mechanisms are used by their clients. DNS servers may then look up this record and require a client to use the stated mechanism, in turn preventing an attacker from sending spoofed messages over UDP. In this paper, we define the specification for DNS Protocol Advertisement Records, considerations that were made, and comparisons to alternative approaches. We additionally estimate the effectiveness of advertisements in preventing DDoS attacks and the expected burden to DNS servers.","PeriodicalId":343813,"journal":{"name":"2021 IEEE 29th International Conference on Network Protocols (ICNP)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130886563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DeltaINT: Toward General In-band Network Telemetry with Extremely Low Bandwidth Overhead
Pub Date: 2021-11-01 | DOI: 10.1109/ICNP52444.2021.9651963
Siyuan Sheng, Qun Huang, P. Lee
In-band network telemetry (INT) enriches network management at scale by embedding complete device-internal states into each packet along its forwarding path, yet such embedding of INT information also incurs significant bandwidth overhead in the data plane. We propose DeltaINT, a general INT framework that achieves extremely low bandwidth overhead and supports various packet-level and flow-level applications in network management. DeltaINT builds on the insight that state changes are often negligible most of the time, so it embeds a state into a packet only when the state change is deemed significant. We theoretically derive the time/space complexities and the bounds of bandwidth mitigation for DeltaINT. We implement DeltaINT in both software and P4. Our evaluation shows that DeltaINT reduces the INT bandwidth by up to 93%, and its deployment on a Barefoot Tofino switch incurs limited hardware resource usage.
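The core embed-on-significant-change rule can be sketched in a few lines of Python; the threshold test and state fields below are illustrative, not DeltaINT's exact encoding or P4 implementation.

```python
# Minimal sketch of DeltaINT's core idea: a switch embeds its internal
# state into a packet only when the state has changed significantly since
# the last embedding (threshold and state fields are illustrative).
class DeltaIntSwitch:
    def __init__(self, threshold):
        self.threshold = threshold
        self.last_embedded = {}   # field -> last value written into a packet

    def maybe_embed(self, packet_int_stack, current_state):
        """current_state: dict such as {"queue_depth": 37, "hop_latency_ns": 820}."""
        significant = {
            k: v for k, v in current_state.items()
            if abs(v - self.last_embedded.get(k, 0)) > self.threshold
        }
        if significant:                       # embed only the changed fields
            packet_int_stack.append(significant)
            self.last_embedded.update(significant)

sw = DeltaIntSwitch(threshold=10)
stack = []
sw.maybe_embed(stack, {"queue_depth": 37, "hop_latency_ns": 820})  # embedded
sw.maybe_embed(stack, {"queue_depth": 39, "hop_latency_ns": 823})  # skipped
print(stack)
```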
{"title":"DeltaINT: Toward General In-band Network Telemetry with Extremely Low Bandwidth Overhead","authors":"Siyuan Sheng, Qun Huang, P. Lee","doi":"10.1109/ICNP52444.2021.9651963","DOIUrl":"https://doi.org/10.1109/ICNP52444.2021.9651963","url":null,"abstract":"In-band network telemetry (INT) enriches network management at scale through the embedding of complete device-internal states into each packet along its forwarding path, yet such embedding of INT information also incurs significant band-width overhead in the data plane. We propose DeltaINT, a general INT framework that achieves extremely low bandwidth overhead and supports various packet-level and flow-level applications in network management. DeltaINT builds on the insight that state changes are often negligible at most time, so it embeds a state into a packet only when the state change is deemed significant. We theoretically derive the time/space complexities and the bounds of bandwidth mitigation for DeltaINT. We implement DeltaINT in both software and P4. Our evaluation shows that DeltaINT reduces up to 93% of INT bandwidth, and its deployment in a Barefoot Tofino switch incurs limited hardware resource usage.","PeriodicalId":343813,"journal":{"name":"2021 IEEE 29th International Conference on Network Protocols (ICNP)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133593632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Poster: Loading Programmable Data Plane Programs to Virtual Plane
Pub Date: 2021-11-01 | DOI: 10.1109/ICNP52444.2021.9651939
YuXin Zhao
Virtualization of the programmable data plane allows multiple virtual pipelines to be placed on the same physical programmable device, enabling more flexible network function composition, debugging, and more. Existing proposals realize virtualization with a hypervisor-like program that emulates users' programs, which has become the mainstream approach. Despite this progress, these designs lack a study of how to load other programs onto the hypervisor. In this poster, we present HyperC, the first compiler for virtualization in the programmable data plane, which helps build a complete virtualization system. HyperC specially optimizes its IR, reducing the hypervisor's delay by 26.3% on average. At the same time, we address the problem of placing different users' programs under the resource constraints of the virtual plane.
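Since the poster abstract does not describe HyperC's placement algorithm, the following is only a generic first-fit-decreasing sketch of placing users' programs onto virtual planes under a per-plane resource limit, given purely for illustration.

```python
# Generic first-fit-decreasing placement of users' programs onto virtual
# planes under a per-plane resource limit (illustrative only; this is not
# HyperC's actual placement algorithm).
def place_programs(programs, plane_capacity):
    """programs: dict name -> resource demand (e.g. pipeline stages).
    Returns a plane index per program, opening new planes as needed."""
    planes = []          # remaining capacity of each opened virtual plane
    placement = {}
    for name, demand in sorted(programs.items(), key=lambda kv: -kv[1]):
        if demand > plane_capacity:
            raise ValueError(f"{name} exceeds a single plane's capacity")
        for i, free in enumerate(planes):
            if free >= demand:               # first plane that still fits
                planes[i] -= demand
                placement[name] = i
                break
        else:                                # no existing plane fits
            planes.append(plane_capacity - demand)
            placement[name] = len(planes) - 1
    return placement

print(place_programs({"fw": 4, "lb": 3, "nat": 5}, plane_capacity=8))
```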
{"title":"Poster : Loading Programmable Data Plane Programs to Virtual Plane","authors":"YuXin Zhao","doi":"10.1109/ICNP52444.2021.9651939","DOIUrl":"https://doi.org/10.1109/ICNP52444.2021.9651939","url":null,"abstract":"Virtualization of the programmable data plane allows multiple virtual pipelines to be placed on the same physical programmable device, enabling more flexible network function composition, debugging, etc. Existing proposals realize virtualization with a hypervisor-like program to emulate users’ programs, which becomes the mainstream of the current methods. In spite of the progress achieved, their designs lack study of how to load other programs on this hypervisor. In this poster, we present HyperC, the first compiler for virtualization in programmable data plane, which helps to build a complete virtualization system. HyperC specially optimizes its IR, which makes the hypervisor acquire a decreasing delay by 26.3% on average. At the same time, we solve the placement problem of different users under the restriction of virtual plane resources.","PeriodicalId":343813,"journal":{"name":"2021 IEEE 29th International Conference on Network Protocols (ICNP)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114969877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cooperatively Constructing Cost-Effective Content Distribution Networks upon Emerging Low Earth Orbit Satellites and Clouds
Pub Date: 2021-11-01 | DOI: 10.1109/ICNP52444.2021.9651950
Zeqi Lai, Hewu Li, Qi Zhang, Qian Wu, Jianping Wu
Internet content providers typically exploit cloud-based content delivery/distribution networks (CDNs) to provide wide-area data access with high availability and low latency. However, from a global perspective, a large portion of users still suffer from high content access latency due to the insufficient deployment of terrestrial cloud infrastructure. This paper presents StarFront, a cost-effective content distribution framework that optimizes global CDNs and enables low content access latency anywhere. StarFront builds CDNs upon emerging low Earth orbit (LEO) constellations and existing cloud platforms to satisfy low-latency requirements while minimizing operational cost. Specifically, StarFront exploits the key insight that emerging mega-constellations will consist of thousands of LEO satellites equipped with high-speed data links and storage, which can potentially work as a "cache in space" to enable pervasive and low-latency data access. StarFront judiciously places replicas on either LEO satellites or clouds, and dynamically assigns user requests to proper cache servers based on constellation parameters, cloud/user distributions, and pricing policies. Extensive trace-driven evaluations covering geo-distributed vantage points demonstrate that StarFront can effectively reduce global content access latency with acceptable operational cost under representative CDN traffic.
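A toy Python sketch of the request-assignment step is given below: it picks the cheapest replica (satellite or cloud) that meets a latency target and otherwise falls back to the lowest-latency one; the inputs and the policy are assumptions, not StarFront's actual optimization.

```python
# Illustrative request assignment in a satellite+cloud CDN: prefer the
# cheapest cache that meets the latency target, otherwise minimize latency.
# The latency/price figures and the policy are assumptions for this sketch.
def assign_request(caches, latency_target_ms):
    """caches: list of dicts {"name", "latency_ms", "price"} describing the
    replicas currently holding the requested content."""
    feasible = [c for c in caches if c["latency_ms"] <= latency_target_ms]
    if feasible:
        return min(feasible, key=lambda c: c["price"])      # cost-effective choice
    return min(caches, key=lambda c: c["latency_ms"])        # best-effort fallback

caches = [
    {"name": "leo-sat-113", "latency_ms": 18, "price": 0.09},
    {"name": "cloud-eu-west", "latency_ms": 42, "price": 0.04},
]
print(assign_request(caches, latency_target_ms=30)["name"])  # leo-sat-113
```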
{"title":"Cooperatively Constructing Cost-Effective Content Distribution Networks upon Emerging Low Earth Orbit Satellites and Clouds","authors":"Zeqi Lai, Hewu Li, Qi Zhang, Qian Wu, Jianping Wu","doi":"10.1109/ICNP52444.2021.9651950","DOIUrl":"https://doi.org/10.1109/ICNP52444.2021.9651950","url":null,"abstract":"Internet content providers typically exploit cloud-based content delivery/distribution networks (CDNs) to provide wide-area data access with high availability and low latency. However, from a global perspective, a large portion of users still suffer from high content access latency due to the insufficient deployment of terrestrial cloud infrastructures.This paper presents StarFront, a cost-effective content distribution framework to optimize global CDNs and enable low content access latency anywhere. StarFront builds CDNs upon emerging low Earth orbit (LEO) constellations and existing cloud platforms to satisfy the low-latency requirements while minimizing the operational cost. Specifically, StarFront exploits a key insight that emerging mega-constellations will consist of thousands of LEO satellites equipped with high-speed data links and storage, and thus can potentially work as \"cache in space\" to enable pervasive and low-latency data access. StarFront judiciously places replicas on either LEO satellites or clouds, and dynamically assigns user requests to proper cache servers based on constellation parameters, cloud/user distributions and pricing policies. Extensive trace-driven evaluations covering geo-distributed vantage points have demonstrated that: StarFront can effectively reduce the global content access latency with acceptable operational cost under representative CDN traffic.","PeriodicalId":343813,"journal":{"name":"2021 IEEE 29th International Conference on Network Protocols (ICNP)","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116336266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
OBF: A Guaranteed IP Lookup Performance Scheme for Flexible IP Using One Bloom Filter
Pub Date: 2021-11-01 | DOI: 10.1109/ICNP52444.2021.9651925
Shi-Hai Liu, Wanming Luo, Xu Zhou, Bin Yang, YiHao Jia, Zhe Chen, Sheng Jiang
The conventional IP address has a fixed length and lacks extensibility, while the demand for addresses varies greatly across scenarios. Flexible IP (FlexIP), a variable-length IP address, proactively makes the address structure flexible enough to adapt to various network cases; addresses of different lengths can be used to accommodate different demands. However, how to perform addressing efficiently with variable-length addresses remains an open problem. Bloom filter-based addressing appears to be an excellent candidate, offering compact storage and efficient membership queries. In this paper, we propose an OBF-based scheme that uses only one Bloom filter. While keeping nearly the same false positive ratio as the conventional Bloom filter-based scheme, the OBF-based scheme significantly improves addressing efficiency. It has two key features: it achieves constant yet small IP lookup time, and it is insensitive to the length of the address. Simulation results show that the proposed addressing scheme is more suitable for FlexIP addressing than well-known schemes.
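For context, a single-Bloom-filter membership structure over variable-length addresses can look like the generic Python sketch below; the hash construction and parameters are illustrative and do not reproduce OBF's specific design that guarantees constant, length-insensitive lookup time.

```python
# Generic single-Bloom-filter membership sketch over variable-length
# addresses (illustrative only; not OBF's actual construction).
import hashlib

class OneBloomFilter:
    def __init__(self, num_bits=1 << 16, num_hashes=4):
        self.m = num_bits
        self.k = num_hashes
        self.bits = bytearray(self.m // 8)

    def _positions(self, addr: bytes):
        # k independent positions derived by salting one hash function
        for i in range(self.k):
            h = hashlib.blake2b(addr, digest_size=8, salt=bytes([i])).digest()
            yield int.from_bytes(h, "big") % self.m

    def add(self, addr: bytes):
        for p in self._positions(addr):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, addr: bytes):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(addr))

fib = OneBloomFilter()
fib.add(bytes.fromhex("20010db8"))                             # a short address
fib.add(bytes.fromhex("20010db8000000000000000000000001"))     # a longer one
print(bytes.fromhex("20010db8") in fib)   # True; non-inserted addresses may rarely false-positive
```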
{"title":"OBF: A Guaranteed IP Lookup Performance Scheme for Flexible IP Using One Bloom Filter","authors":"Shi-Hai Liu, Wanming Luo, Xu Zhou, Bin Yang, YiHao Jia, Zhe Chen, Sheng Jiang","doi":"10.1109/ICNP52444.2021.9651925","DOIUrl":"https://doi.org/10.1109/ICNP52444.2021.9651925","url":null,"abstract":"The conventional IP address is designed with fixed length and lacking of extensibility, while the demand for addresses varies greatly in different scenarios. Flexible IP (FlexIP), as a variable length IP address, proactively makes address structure flexible enough to adapt to various network cases. Different lengths of the addresses could be used to accommodate different demands. However, how to efficiently addressing with length variable addresses is still a problem to be solved. The Bloom filter-based addressing scheme appears to be an excellent candidate with the possibility of compact storage and efficient member query. In this paper, we propose an OBF-based scheme using only one Bloom filter. While keeping nearly the same false positive ratio as the conventional Bloom filter-based scheme, the OBF-based scheme significantly improves the addressing efficiency. OBF-based has two key features, one is that it achieves constant, yet small IP lookup time, and another is that it is insensitive to the length of the address. Simulation results show that the addressing scheme we proposed is more suitable for FlexIP addressing than well known schemes.","PeriodicalId":343813,"journal":{"name":"2021 IEEE 29th International Conference on Network Protocols (ICNP)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121509455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PIPO: Efficient Programmable Scheduling for Time Sensitive Networking
Pub Date: 2021-11-01 | DOI: 10.1109/ICNP52444.2021.9651944
Chuwen Zhang, Zhikang Chen, Haoyu Song, Ruyi Yao, Yang Xu, Yi Wang, J. Miao, B. Liu
Time Sensitive Networking (TSN) is an emerging Ethernet technology for real-time systems. To address the different Quality-of-Service (QoS) requirements of applications, the IEEE 802.1 TSN Task Group has standardized several packet scheduling and shaping algorithms. Software implementations of these algorithms can hardly meet the performance requirements, while hardware implementations in Application-Specific Integrated Circuits (ASICs) are inflexible. A hardware-programmable scheduler is necessary to resolve this dilemma. Among the existing primitives, the most expressive one is Push-In-Extract-Out (PIEO), but its complexity makes the implementation very expensive, and a relatively lower-cost implementation of PIEO cannot guarantee scheduling correctness for the most critical Time-Triggered (TT) traffic in TSN. As a remedy, in this paper we propose a new Push-In-Pick-Out (PIPO) primitive under a TSN programmable scheduling framework. Composed of simple priority queues, PIPO can express all existing TSN scheduling and shaping algorithms and is flexible enough to support future ones. Our PIPO implementation guarantees TT traffic scheduling correctness. The simulation results corroborate the theoretical analysis that the low-cost PIPO can closely approximate PIEO and sustain high bandwidth utilization. The prototype on a Xilinx FPGA shows that, with 2,048 inputs, the PIPO-based scheduler achieves a throughput of 70 Mpps, 1.64x higher than the PIEO-based one, while using only 14.7% of the latter's Look-Up Tables (LUTs) and 40.5% of its Block RAMs.
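A simplified software model of scheduling with per-priority FIFO queues, in the spirit of a push-in/pick-out primitive, is sketched below; eligibility is modeled as a release-time gate, and the parameters and semantics are assumptions rather than the paper's FPGA design.

```python
# Simplified software model of scheduling built from per-priority FIFO
# queues (illustrative; not the PIPO FPGA design or its exact semantics).
import collections

class PipoLikeScheduler:
    def __init__(self, num_priorities=8):
        self.queues = [collections.deque() for _ in range(num_priorities)]

    def push_in(self, priority, eligible_at, packet):
        """Enqueue into the FIFO of the given priority class."""
        self.queues[priority].append((eligible_at, packet))

    def pick_out(self, now):
        """Pick the head of the highest-priority queue whose head packet is
        already eligible (modeled here as a time-aware gate being open)."""
        for q in self.queues:                 # index 0 = highest priority
            if q and q[0][0] <= now:
                return q.popleft()[1]
        return None

sched = PipoLikeScheduler()
sched.push_in(0, eligible_at=100, packet="TT-frame")   # time-triggered traffic
sched.push_in(3, eligible_at=0, packet="best-effort")
print(sched.pick_out(now=50))    # best-effort (TT gate not yet open)
print(sched.pick_out(now=120))   # TT-frame
```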
{"title":"PIPO: Efficient Programmable Scheduling for Time Sensitive Networking","authors":"Chuwen Zhang, Zhikang Chen, Haoyu Song, Ruyi Yao, Yang Xu, Yi Wang, J. Miao, B. Liu","doi":"10.1109/ICNP52444.2021.9651944","DOIUrl":"https://doi.org/10.1109/ICNP52444.2021.9651944","url":null,"abstract":"Time Sensitive Networking (TSN) is an emerging Ethernet technology for real-time systems. To address different Quality-of-Service (QoS) requirements of applications, IEEE 802.1 TSN Task Group has standardized several packet scheduling and shaping algorithms. The software implementation of these algorithms is hard to meet the performance requirements, while the hardware implementation in Application-Specific Integrated Circuit (ASIC) is inflexible. A hardware-programmable scheduler is necessary to deal with this dilemma. Among the existing primitives, the most expressive one is Push-In-Extract-Out (PIEO), but its complexity makes the implementation very expensive. A relatively lower-cost implementation of PIEO cannot guarantee the scheduling correctness for the most critical Time-Triggered (TT) traffic in TSN. As a remedy, in this paper we propose a new Push-In-Pick-Out (PIPO) primitive under a TSN programmable scheduling framework. Composed of simple priority queues, PIPO can express all existing TSN scheduling and shaping algorithms, and is flexible enough to support future ones. Our PIPO implementation guarantees the TT traffic scheduling correctness. The simulation results corroborate the theoretical analysis that the low-cost PIPO can closely approximate PIEO and sustain a high bandwidth utilization. The prototype on Xilinx FPGA shows that, with 2,048 inputs, the PIPO-based scheduler achieves a throughput of 70 Mpps, which is 1.64x higher than the PIEO-based one, but using only 14.7% Look-Up Tables (LUTs) and 40.5% Block RAMs of the latter.","PeriodicalId":343813,"journal":{"name":"2021 IEEE 29th International Conference on Network Protocols (ICNP)","volume":"25 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125980091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ATRIA: Autonomous Traffic-Aware Scheduling for Industrial Wireless Sensor-Actuator Networks
Pub Date: 2021-11-01 | DOI: 10.1109/ICNP52444.2021.9651914
Xia Cheng, M. Sha
Recent years have witnessed rapid adoption of low-power Wireless Sensor-Actuator Networks (WSANs) in process industries. To meet the critical demand for reliable and real-time communication in harsh industrial environments, the industrial WSAN standards, such as WirelessHART, ISA100, WIA-FA, and 6TiSCH, make a set of specific design choices, such as employing the Time Slotted Channel Hopping (TSCH) technique. Such design choices distinguish industrial WSANs from traditional Wireless Sensor Networks (WSNs), which were designed for best-effort services. Recently, there has been increasing interest in developing new methods to enable autonomous transmission scheduling for industrial WSANs that run TSCH and the Routing Protocol for Low-Power and Lossy Networks (RPL). Our study shows that the current approaches fail to consider the traffic loads of different devices when assigning time slots and channels, which significantly compromises network performance when facing high data rates. In this paper, we introduce ATRIA, a novel Autonomous Traffic-Aware transmission scheduling method for industrial WSANs. The device that runs ATRIA can detect its traffic load based on its local routing information and then schedule its transmissions accordingly without the need to exchange information with neighboring devices. Experimental results show that ATRIA provides significantly higher end-to-end network reliability and lower end-to-end latency without introducing additional overhead compared with a state-of-the-art baseline.
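To illustrate what traffic-aware autonomous scheduling can look like, the sketch below lets a node derive its load from local routing state (its RPL descendant count) and claim proportionally many TSCH cells via a hash of its identifier, without any neighbor signaling; the hash mapping and parameters are assumptions, not ATRIA's actual rules.

```python
# Illustrative traffic-aware autonomous cell allocation for a TSCH node:
# the number of claimed cells scales with local routing load, and the cell
# coordinates are derived by hashing (parameters are assumptions only).
import hashlib

def autonomous_cells(node_id, descendant_count, slotframe_len=101, num_channels=16):
    """Return (slot, channel) cells for one slotframe; a node forwarding
    traffic for more descendants schedules proportionally more cells."""
    packets_per_frame = 1 + descendant_count      # own packet + forwarded ones
    cells = []
    for i in range(packets_per_frame):
        digest = hashlib.sha256(f"{node_id}-{i}".encode()).digest()
        slot = int.from_bytes(digest[:4], "big") % slotframe_len
        channel = int.from_bytes(digest[4:8], "big") % num_channels
        cells.append((slot, channel))
    return cells

print(autonomous_cells(node_id=0x17, descendant_count=3))
```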
{"title":"ATRIA: Autonomous Traffic-Aware Scheduling for Industrial Wireless Sensor-Actuator Networks","authors":"Xia Cheng, M. Sha","doi":"10.1109/ICNP52444.2021.9651914","DOIUrl":"https://doi.org/10.1109/ICNP52444.2021.9651914","url":null,"abstract":"Recent years have witnessed rapid adoption of low-power Wireless Sensor-Actuator Networks (WSANs) in process industries. To meet the critical demand for reliable and real-time communication in harsh industrial environments, the industrial WSAN standards, such as WirelessHART, ISA100, WIA-FA, and 6TiSCH, make a set of specific design choices, such as employing the Time Slotted Channel Hopping (TSCH) technique. Such design choices distinguish industrial WSANs from traditional Wireless Sensor Networks (WSNs), which were designed for best-effort services. Recently, there has been increasing interest in developing new methods to enable autonomous transmission scheduling for industrial WSANs that run TSCH and the Routing Protocol for Low-Power and Lossy Networks (RPL). Our study shows that the current approaches fail to consider the traffic loads of different devices when assigning time slots and channels, which significantly compromises network performance when facing high data rates. In this paper, we introduce ATRIA, a novel Autonomous Traffic-Aware transmission scheduling method for industrial WSANs. The device that runs ATRIA can detect its traffic load based on its local routing information and then schedule its transmissions accordingly without the need to exchange information with neighboring devices. Experimental results show that ATRIA provides significantly higher end-to-end network reliability and lower end-to-end latency without introducing additional overhead compared with a state-of-the-art baseline.","PeriodicalId":343813,"journal":{"name":"2021 IEEE 29th International Conference on Network Protocols (ICNP)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125228174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DNSonChain: Delegating Privacy-Preserved DNS Resolution to Blockchain
Pub Date: 2021-11-01 | DOI: 10.1109/ICNP52444.2021.9651951
Lin Jin, Shuai Hao, Yan Huang, Haining Wang, Chase Cotton
The Domain Name System (DNS) is known to present privacy concerns. To this end, decentralized blockchains have been used to host DNS records, so that users can synchronize with the blockchain to maintain a local DNS database and resolve domain names locally. However, existing blockchain-based solutions either do not guarantee that a domain name is controlled by its "true" owner, or have to resort to DNSSEC, a protocol that is not yet widely adopted, for verifying ownership. In this paper, we present DNSonChain, a new blockchain-based naming service compatible with DNS. It allows domain owners to claim their domain ownership on the blockchain where DNS records are hosted. The core function of DNSonChain is to validate domain ownership in a decentralized manner. We propose a majority-vote mechanism that randomly selects multiple participants (i.e., voters) in the system to vote on the authority of domain ownership. To resist attacks from fraudulent voters, DNSonChain requires two rounds of voting. Our security analysis shows that DNSonChain is robust against several types of security failures and able to recover from various attacks. We implemented a prototype of DNSonChain as an Ethereum decentralized application and evaluated it on an Ethereum testnet.
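A toy model of the two-round majority vote is shown below; the voter-selection policy, panel size, and ownership check are placeholders, and the real system implements this logic in an Ethereum smart contract.

```python
# Simplified model of two-round majority voting on a domain-ownership
# claim (illustrative Python, not the actual DNSonChain contract).
import random

def two_round_vote(voters, check_ownership, voters_per_round=7, seed=None):
    """check_ownership(voter) -> bool; the claim is accepted only if a
    strict majority approves it in each of two independent rounds."""
    rng = random.Random(seed)
    for _ in range(2):
        panel = rng.sample(voters, voters_per_round)   # random voter panel
        approvals = sum(1 for v in panel if check_ownership(v))
        if approvals * 2 <= voters_per_round:          # no strict majority
            return False
    return True

# usage with a toy mix of honest and fraudulent voters
voters = ["honest"] * 90 + ["fraud"] * 10
accepted = two_round_vote(voters, check_ownership=lambda v: v == "honest",
                          voters_per_round=7, seed=1)
print(accepted)
```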
{"title":"DNSonChain: Delegating Privacy-Preserved DNS Resolution to Blockchain","authors":"Lin Jin, Shuai Hao, Yan Huang, Haining Wang, Chase Cotton","doi":"10.1109/ICNP52444.2021.9651951","DOIUrl":"https://doi.org/10.1109/ICNP52444.2021.9651951","url":null,"abstract":"Domain Name System (DNS) is known to present privacy concerns. To this end, decentralized blockchains have been used to host DNS records, so that users can synchronize with the blockchain to maintain a local DNS database and resolve domain names locally. However, existing blockchain-based solutions either do not guarantee a domain name is controlled by its \"true\" owner; or have to resort to DNSSEC, a not yet widely adopted protocol, for verifying ownership. In this paper, we present DNSonChain, a new blockchain-based naming service compatible with DNS. It allows domain owners to claim their domain ownership on the blockchain where DNS records are hosted. The core function of DNSonChain is to validate the domain ownership in a decentralized manner. We propose a majority vote mechanism that randomly selects multiple participants (i.e., voters) in the system to vote for the authority of domain ownership. To provide resistance to attacks from fraudulent voters, DNSonChain requires two rounds of voting processes. Our security analysis shows that DNSonChain is robust against several types of security failures, able to recover from various attacks. We implemented a prototype of DNSonChain as an Ethereum decentralized application and evaluate it on an Ethereum Testnet.","PeriodicalId":343813,"journal":{"name":"2021 IEEE 29th International Conference on Network Protocols (ICNP)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114710912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}