Terra
Santosh Ganji, Jaewon Kim, P. R. Kumar
We address the problem of pedestrian blockage in outdoor mmWave networks, which can disrupt Line-of-Sight (LoS) communication and cause an outage. An outage forces either reacquisition as a new user, which can take up to 1.28 s in 5G New Radio and disrupts high-performance applications, or handover to a different base station (BS), which requires costly dense deployments so that multiple base stations are always visible to an outdoor mobile. We have found that there typically exists a strong ground reflection from concrete and gravel surfaces, with received signal strength (RSS) within 4--6 dB of the LoS path. The mobile can switch to such a Non-Line-of-Sight (NLoS) beam to sustain the link as a control channel during a blockage event. The mobile thereby maintains time synchronization with the base station and can revert to the LoS path once the temporary blockage clears. We present a protocol, Terra, to quickly discover, cache, and employ ground reflections. It can be used in most outdoor built environments, since pedestrian blockages typically last only a few hundred milliseconds.
DOI: 10.1145/3546037.3546063
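A minimal Python sketch of the cache-and-switch behavior described above. The beam-management callbacks (measure_rss, steer_beam) and the 10 dB blockage threshold are illustrative assumptions, not the published Terra protocol.

```python
# Hypothetical sketch of Terra-style blockage handling: measure_rss(),
# steer_beam(), and the 10 dB threshold are illustrative assumptions.
import time

BLOCKAGE_DROP_DB = 10.0   # assumed RSS drop that signals a pedestrian blockage

class BeamCache:
    """Cache the best ground-reflection (NLoS) beam found during idle scans."""
    def __init__(self):
        self.los_beam = None
        self.nlos_beam = None

    def update(self, beam_id, rss_dbm, is_los):
        if is_los:
            self.los_beam = (beam_id, rss_dbm)
        elif self.nlos_beam is None or rss_dbm > self.nlos_beam[1]:
            self.nlos_beam = (beam_id, rss_dbm)

def maintain_link(cache, measure_rss, steer_beam):
    """Switch to the cached ground-reflection beam while the LoS path is blocked."""
    los_id, los_rss = cache.los_beam
    while True:
        if measure_rss(los_id) < los_rss - BLOCKAGE_DROP_DB and cache.nlos_beam:
            steer_beam(cache.nlos_beam[0])      # keep the control channel alive
            while measure_rss(los_id) < los_rss - BLOCKAGE_DROP_DB:
                time.sleep(0.01)                # blockages last ~100s of ms
            steer_beam(los_id)                  # revert once LoS reappears
        time.sleep(0.01)
```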
TCP-INT: lightweight network telemetry with TCP transport
Grzegorz Jereczek, Theo Jepsen, Simon Wass, Bimmy Pujari, Jerry Zhen, Jeongkeun Lee
In-band Network Telemetry (INT) provides visibility into the state of the network and can be used for monitoring and debugging. However, existing implementations do not make telemetry available to end-hosts. In this demonstration, we present TCP-INT, which delivers network telemetry directly to end-hosts, and show the advantages of correlating end-host state with the state of the network fabric.
DOI: 10.1145/3546037.3546064
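To illustrate what correlating end-host state with fabric state can look like, here is a small Python sketch. The record fields and the time-window join are assumptions for illustration only and do not reflect TCP-INT's actual wire format or tooling.

```python
# Illustrative correlation of host-side TCP statistics with per-flow fabric
# telemetry; all field names here are assumed, not TCP-INT's format.
from dataclasses import dataclass

@dataclass
class HostSample:          # taken from the sender's TCP stack
    ts_us: int
    flow: str
    srtt_us: int
    cwnd_bytes: int

@dataclass
class FabricSample:        # telemetry reported for the same flow by the fabric
    ts_us: int
    flow: str
    switch_id: int
    queue_depth_bytes: int
    hop_latency_ns: int

def correlate(host, fabric, window_us=1000):
    """Pair each host sample with fabric telemetry from the same time window,
    so an srtt spike can be attributed to a deep queue on a specific switch."""
    out = []
    for h in host:
        near = [f for f in fabric
                if f.flow == h.flow and abs(f.ts_us - h.ts_us) <= window_us]
        if near:
            worst = max(near, key=lambda f: f.queue_depth_bytes)
            out.append((h, worst))
    return out
```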
CiraaS
Kenji Tanaka, Y. Arikawa, T. Ito, Yukinari Matsuda, Keisuke Kamahori, Shinya Kaji, T. Sakamoto
Cloud computing reduces provider and user costs by multiplexing workloads. Its advantages include high utilization through temporal and spatial sharing of computing resources, and a subscription model that charges only for the resources and time used [1]. As cloud computing has evolved to maximize these advantages, microservices [11], function-as-a-service (FaaS) [10], and other more granular, short-lived cloud services have emerged. Today's FaaS offerings are oriented toward fast provisioning, fine-grained billing, tight memory constraints, stateless processing, and real-time processing [15]. However, since context switching on CPUs is the bottleneck [8], these processing demands are no longer satisfactorily met. A shift toward a more efficient system architecture for cloud computing is therefore expected [10].
DOI: 10.1145/3546037.3546059
Mind the cost of telemetry data analysis
Alessandra Fais, G. Antichi, S. Giordano, G. Lettieri, G. Procissi
Data Stream Processing engines are emerging as a promising solution for efficiently processing continuous streams of telemetry information. In this poster, we compare four of them: Storm, Flink, Spark, and WindFlow. The aim is to shed some light on the best streaming engine for network traffic analysis.
DOI: 10.1145/3546037.3546052
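As a rough, engine-agnostic illustration of the workload class being compared, the following Python sketch computes per-flow byte counts over tumbling windows; the record layout and window size are assumed for illustration and are not the paper's benchmark query.

```python
# Framework-agnostic sketch of a streaming traffic-analysis query:
# per-flow byte counts over tumbling windows.
from collections import defaultdict

def per_flow_bytes(packets, window_s=1.0):
    """packets: iterable of (timestamp, src, dst, length) tuples in time order."""
    window_start, counts = None, defaultdict(int)
    for ts, src, dst, length in packets:
        if window_start is None:
            window_start = ts
        if ts - window_start >= window_s:          # close the tumbling window
            yield window_start, dict(counts)
            window_start, counts = ts, defaultdict(int)
        counts[(src, dst)] += length
    if counts:
        yield window_start, dict(counts)
```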
Linnet: limit order books within switches
Xinpeng Hong, Changgang Zheng, S. Zohren, Noa Zilberman
Financial trading nowadays often relies on machine learning. However, many trading applications require very short response times, which traditional machine learning frameworks cannot always support. We present Linnet, which provides financial market prediction within programmable switches. Linnet builds limit order books from high-frequency market data feeds inside the switch and uses them for machine-learning-based market prediction. Linnet demonstrates the potential to predict future stock price movements with high accuracy and low latency, increasing financial gains.
DOI: 10.1145/3546037.3546057
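A minimal Python sketch of maintaining a limit order book from a stream of add/cancel updates, the data structure Linnet builds; note that the real system does this inside a P4 switch pipeline, and the message fields below are assumptions.

```python
# Host-side sketch of a limit order book; the feed message format is assumed.
from collections import defaultdict

class LimitOrderBook:
    def __init__(self):
        self.bids = defaultdict(int)   # price -> resting size
        self.asks = defaultdict(int)

    def apply(self, side, price, size_delta):
        """Apply an add (positive delta) or cancel/execution (negative delta)."""
        book = self.bids if side == "B" else self.asks
        book[price] += size_delta
        if book[price] <= 0:
            del book[price]

    def top(self):
        """Best bid/ask: typical inputs to a price-movement predictor."""
        best_bid = max(self.bids) if self.bids else None
        best_ask = min(self.asks) if self.asks else None
        return best_bid, best_ask
```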
Robust heuristics: attacks and defenses for job size estimation in WSJF systems
Erica Chiang, Nirav Atre, Hugo Sadok
Packet scheduling algorithms control the order in which a system serves network packets, which can have a significant impact on system performance. Many systems rely on Shortest Job First (SJF), an important packet scheduling algorithm with many desirable properties. Classic results [3] show that SJF provably minimizes average job completion time, and recent work [1] shows that a variant of SJF also protects systems against algorithmic complexity attacks (ACAs), a particularly dangerous class of Denial-of-Service (DoS) attacks [4]. In an ACA, an adversary exploits the worst-case behavior of an algorithm to induce a large amount of work in the target system, causing a significant drop in goodput while using only a small amount of attack bandwidth. SurgeProtector [1] demonstrated that Weighted SJF (WSJF), which schedules packets by the ratio of job size to packet size, significantly mitigates the impact of ACAs on any networked system.
DOI: 10.1145/3546037.3546062
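Since WSJF is defined above as scheduling by the ratio of job size to packet size, a short Python sketch of that policy follows; the packet representation and the job-size estimator are assumptions, and this is not SurgeProtector's implementation.

```python
# Sketch of Weighted SJF: serve packets in increasing order of
# (estimated job size / packet size).
import heapq
import itertools

class WSJFScheduler:
    def __init__(self, estimate_job_size):
        self._estimate = estimate_job_size      # heuristic job-size estimator
        self._heap = []
        self._seq = itertools.count()           # tie-breaker for equal priorities

    def enqueue(self, packet, packet_size_bytes):
        priority = self._estimate(packet) / packet_size_bytes
        heapq.heappush(self._heap, (priority, next(self._seq), packet))

    def dequeue(self):
        """Serve the packet with the smallest job-size / packet-size ratio."""
        if not self._heap:
            return None
        _, _, packet = heapq.heappop(self._heap)
        return packet
```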
Ls-liquid: towards container-irrelevant liquid sensing on smartphones
Xue Sun, Chao Feng
Liquid sensing with wireless signals offers lay consumers a convenient way to check the quality and purity of liquids (e.g., detecting food additives in beverages) and to screen for kidney disease (e.g., tracking protein in urine), which is important in daily life. In this paper, we propose Ls-liquid, a system that employs a commodity smartphone to sense liquids in common containers without a specialized setup. Our work builds on the acoustic impedance property: different liquids have different acoustic impedances, so the signals they reflect differ. Our experimental evaluations demonstrate that Ls-liquid can identify one kind of food additive in four different beverages with over 90% accuracy and can measure protein concentrations below 1 mg/100 mL in urine. Ls-liquid is also robust to environment and container changes.
DOI: 10.1145/3546037.3546053
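The acoustic-impedance reasoning can be made concrete with the standard normal-incidence reflection coefficient R = (Z2 - Z1) / (Z2 + Z1), where Z is the product of density and sound speed. The sketch below uses rough, assumed impedance values purely for illustration; none of the numbers are taken from the paper.

```python
# Why different liquids reflect differently: the normal-incidence pressure
# reflection coefficient at a boundary is R = (Z2 - Z1) / (Z2 + Z1).
# Impedance values below are rough, illustrative figures.
def reflection_coefficient(z1, z2):
    """Fraction of incident pressure amplitude reflected at a Z1 -> Z2 boundary."""
    return (z2 - z1) / (z2 + z1)

Z_CONTAINER = 3.1e6   # e.g., an acrylic wall, kg/(m^2*s), assumed
Z_WATER     = 1.48e6  # water, approximate
Z_ETHANOL   = 0.95e6  # ethanol, approximate

for name, z in [("water", Z_WATER), ("ethanol", Z_ETHANOL)]:
    print(name, round(reflection_coefficient(Z_CONTAINER, z), 3))
```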
P4 programmable patch panel (P7): an instant 100G emulated network on your Tofino-based pizza box
Fabricio Rodriguez, F. Vogt, Ariel Góes De Castro, M. Schwarz, Christian Esteve Rothenberg
Platforms for high-fidelity network experiments are traditionally based on virtualized and emulation-based environments (e.g., Mininet). While extremely useful for teaching and for supporting research practice, existing experimental platforms are commonly limited to transmission speeds of 10 Gbps and suffer from performance-fidelity trade-offs as well as inherent scalability constraints. With the programmability that P4 brings to networking researchers and the capabilities of new-generation P4 hardware supporting the Portable Switch Architecture (PSA) and the Tofino Native Architecture (TNA), it is possible to realize packet-processing pipelines that emulate network link characteristics and to instantiate a network topology carrying line-rate traffic on a single physical P4 switch (e.g., Tofino). This is the main contribution of the P7 (P4 Programmable Patch Panel) emulator. In this demonstration, we show how to generate different network topologies, from a single link to more complex network scenarios featuring various devices and paths, including varied link characteristics (e.g., latency, jitter, packet loss, bandwidth) and 100G traffic capacities.
DOI: 10.1145/3546037.3546046
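A conceptual Python sketch of the "patch panel" idea: a declarative list of emulated links compiled into per-port impairment rules that a single switch would apply. The rule fields are assumptions for illustration; P7's actual configuration interface and P4 pipeline are not shown here.

```python
# Conceptual compilation of an emulated topology into per-port link-impairment
# rules; field names and the rule format are illustrative assumptions.
emulated_links = [
    # (ingress_port, egress_port, delay_us, jitter_us, loss_pct, rate_gbps)
    (1, 2, 50, 5, 0.1, 100),
    (2, 1, 50, 5, 0.1, 100),
    (3, 4, 200, 20, 0.0, 10),
]

def compile_rules(links):
    """Turn each emulated link into a match-action style rule keyed on ingress port."""
    rules = {}
    for in_port, out_port, delay, jitter, loss, rate in links:
        rules[in_port] = {
            "forward_to": out_port,
            "delay_us": delay,
            "jitter_us": jitter,
            "drop_probability": loss / 100.0,
            "rate_limit_gbps": rate,
        }
    return rules

for port, rule in compile_rules(emulated_links).items():
    print(f"ingress port {port} -> {rule}")
```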
Network-accelerated cluster scheduler
Radostin Stoyanov, W. Armour, Noa Zilberman
Efficient use of computing clusters is crucial in large-scale data centers: even small gains in utilization can save millions of dollars. However, as the number of microsecond-scale tasks increases, using a CPU to schedule tasks becomes inefficient. Cluster scheduling running within the network can solve this problem and brings additional benefits in scalability, performance, and power efficiency. However, the resource constraints of programmable network devices make network-accelerated cluster scheduling hard. In this paper we propose P4-K8s-Scheduler, a network-accelerated cluster scheduler for Kubernetes implemented on a programmable network device. Preliminary results show that by scheduling Pods in the network at line rate, P4-K8s-Scheduler can reduce scheduling overheads by an order of magnitude compared to state-of-the-art Kubernetes schedulers.
DOI: 10.1145/3546037.3546050
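For intuition, here is a host-side Python sketch of the kind of placement decision such a scheduler makes (fit the Pod on the least-loaded feasible node); the data model is an illustrative assumption, and P4-K8s-Scheduler evaluates its decisions inside the switch pipeline, not in Python.

```python
# Sketch of a Pod placement decision; field names are assumed for illustration.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_cpu_millicores: int
    free_mem_mib: int

def place_pod(nodes, cpu_req, mem_req):
    """Return the least-loaded node that can fit the Pod, or None."""
    feasible = [n for n in nodes
                if n.free_cpu_millicores >= cpu_req and n.free_mem_mib >= mem_req]
    if not feasible:
        return None
    best = max(feasible, key=lambda n: (n.free_cpu_millicores, n.free_mem_mib))
    best.free_cpu_millicores -= cpu_req   # reserve the resources on the chosen node
    best.free_mem_mib -= mem_req
    return best.name
```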
Deep or statistical: an empirical study of traffic predictions on multiple time scales
Yu Qiao, Chengxiang Li, Shuzheng Hao, Junying Wu, Liang Zhang
Traffic prediction aims to forecast the future traffic level based on past observations. In this paper, we conduct an empirical study of traffic prediction for a campus trace on different time scales and draw the following conclusions: 1) deep learning performs well on coarser time scales; 2) with finer time granularity or insufficient data, statistical and regression models perform better; 3) for a one-week trace, a granularity of 5 minutes has the strongest predictability.
DOI: 10.1145/3546037.3546048
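A small Python sketch of the multi-time-scale setup, assuming a pandas time series of traffic volumes: resample the trace to several granularities and score a naive last-value baseline at each scale. The chosen scales and the baseline are illustrative assumptions, not the paper's models.

```python
# Resample a traffic trace to several time scales and score a naive baseline;
# the scales and the baseline are illustrative, not the paper's setup.
import numpy as np
import pandas as pd

def evaluate_scales(trace: pd.Series, scales=("1min", "5min", "30min", "1h")):
    """trace: per-event byte counts indexed by timestamp (DatetimeIndex)."""
    results = {}
    for scale in scales:
        volume = trace.resample(scale).sum()       # traffic level at this scale
        pred = volume.shift(1).dropna()            # naive: predict the previous value
        truth = volume.iloc[1:]
        results[scale] = np.mean(np.abs(truth.values - pred.values))
    return results
```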