Room-Scale Location Trace Tracking via Continuous Acoustic Waves
Jie Lian, Xu Yuan, Jiadong Lou, Li Chen, Hao Wang, Nianfeng Tzeng
ACM Transactions on Sensor Networks, published 2024-02-20. DOI: 10.1145/3649136
The increasing prevalence of smart devices spurs the development of emerging indoor localization technologies that support diverse personalized applications at home. Given the marked drawbacks of popular chirp-signal-based approaches, we aim to develop a novel device-free localization system based on continuous waves at an inaudible frequency. To achieve this goal, we develop solutions for fine-grained analysis that can precisely locate moving human traces in a room-scale environment. In particular, a smart speaker is controlled to emit continuous waves at an inaudible 20 kHz, while a co-located microphone array records their Doppler reflections for localization. We first develop solutions to remove potential noise and then propose a novel idea of slicing signals into a set of narrowband signals, each of which is likely to include at most one body segment's reflection. Unlike previous studies, which take the original signals themselves as the baseband, our solution uses the Doppler frequency of a narrowband signal to estimate the velocity first and applies it to obtain the accurate baseband frequency, which permits a precise phase measurement after I-Q (i.e., in-phase and quadrature) decomposition. A signal model is then developed to formulate the phase in terms of a body segment's velocity, range, and angle. We next develop novel solutions to estimate the motion state in each narrowband signal, cluster the motion states of different body segments corresponding to the same person, and locate the moving traces while mitigating multi-path effects. Our system is implemented with commodity devices in room environments for performance evaluation. The experimental results show that our system can effectively localize up to three persons in a room, with average errors of 7.49 cm for a single person, 24.06 cm for two persons, and 51.15 cm for three persons.
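To make the velocity-first demodulation concrete, below is a minimal Python sketch of that pipeline under stated assumptions (343 m/s speed of sound, a 48 kHz microphone sampling rate, a 200 Hz low-pass cutoff, and function names of our own choosing); it illustrates the abstract's idea, not the authors' implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

C = 343.0      # speed of sound in air (m/s), assumed
FC = 20_000.0  # inaudible carrier frequency (Hz), from the paper
FS = 48_000.0  # microphone sampling rate (Hz), assumed

def doppler_velocity(f_doppler):
    """Radial velocity implied by a round-trip Doppler shift: v = f_d * c / (2 * f_c)."""
    return f_doppler * C / (2.0 * FC)

def iq_phase(x, f_doppler):
    """Demodulate a narrowband slice against its velocity-corrected baseband
    f_b = f_c + f_d, then recover the unwrapped phase from the I-Q components."""
    t = np.arange(len(x)) / FS
    fb = FC + f_doppler
    i = x * np.cos(2 * np.pi * fb * t)    # in-phase mix-down
    q = -x * np.sin(2 * np.pi * fb * t)   # quadrature mix-down
    b, a = butter(4, 200.0 / (FS / 2))    # keep only the slow reflection envelope
    i, q = filtfilt(b, a, i), filtfilt(b, a, q)
    return np.unwrap(np.arctan2(q, i))
```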
{"title":"Room-Scale Location Trace Tracking via Continuous Acoustic Waves","authors":"Jie Lian, Xu Yuan, Jiadong Lou, Li Chen, Hao Wang, Nianfeng Tzeng","doi":"10.1145/3649136","DOIUrl":"https://doi.org/10.1145/3649136","url":null,"abstract":"<p>The increasing prevalence of smart devices spurs the development of emerging indoor localization technologies for supporting diverse personalized applications at home. Given marked drawbacks of popular chirp signal-based approaches, we aim to develop a novel device-free localization system via the continuous wave of the inaudible frequency. To achieve this goal, solutions are developed for fine-grained analyses, able to precisely locate moving human traces in the room-scale environment. In particular, a smart speaker is controlled to emit continuous waves at inaudible 20<i>kHz</i>, with a co-located microphone array to record their Doppler reflections for localization. We first develop solutions to remove potential noises and then propose a novel idea by slicing signals into a set of narrowband signals, each of which is likely to include at most one body segment’s reflection. Different from previous studies, which take original signals themselves as the baseband, our solutions employ the Doppler frequency of a narrowband signal to estimate the velocity first and apply it to get the accurate baseband frequency, which permits a precise phase measurement after I-Q (i.e., in-phase and quadrature) decomposition. A signal model is then developed, able to formulate the phase with body segment’s velocity, range, and angle. We next develop novel solutions to estimate the motion state in each narrowband signal, cluster the motion states for different body segments corresponding to the same person, and locate the moving traces while mitigating multi-path effects. Our system is implemented with commodity devices in room environments for performance evaluation. The experimental results exhibit that our system can conduct effective localization for up to three persons in a room, with the average errors of 7.49<i>cm</i> for a single person, with 24.06<i>cm</i> for two persons, with 51.15<i>cm</i> for three persons.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"46 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139928694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

UETOPSIS: A Data-Driven Intelligence Approach to Security Decisions for Edge Computing in Smart Cities
Lijun Xiao, Dezhi Han, Kuan-Ching Li, Muhammad Khurram Khan
ACM Transactions on Sensor Networks, published 2024-02-14. DOI: 10.1145/3648373
Despite considerable technological advances, smart cities still face problems such as unstable cloud-server connections, insecure data transmission, and slight deficiencies in the TCP/IP network architecture. To address these issues, we propose a data-driven intelligence approach to security decisions under the Named Data Networking (NDN) architecture for edge computing, taking into consideration factors that affect device entry in smart cities, such as device performance, load, Bluetooth signal strength, and scan frequency. Although existing Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) methods based on entropy weights have been improved and applied, their decision results remain unstable. We therefore propose a TOPSIS technique based on utility functions and entropy weights, named UETOPSIS, in which a utility function chosen according to each attribute's influence on the decision is applied, ensuring a stable ranking of decision results. Starting from an analysis of the controllers' main workload, we rely on an entropy-based weighting mechanism to select a suitable master controller for the multi-control protocol in the smart city system, and we use utility functions to transform the attribute values before combining the normalized utility values. Lastly, a prototype is developed for performance evaluation. Experimental evaluation and analysis show that the proposed work has better authenticity and reliability than existing works and can reduce the workload of edge computing devices when forwarding data, with stability 24.7% higher than TOPSIS. This significantly improves system fault tolerance and reliability in smart cities, since the second-ranked controller can efficiently take over when the central controller fails or is damaged.
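The entropy-weight TOPSIS core that UETOPSIS builds on can be sketched in a few lines of Python; the utility transform below (a square root) and the example attribute matrix are placeholders of ours, since the paper's exact utility functions are not given in the abstract.

```python
import numpy as np

def entropy_weights(X):
    """Entropy-weight method: rows are candidates, columns are (positive) criteria."""
    P = X / X.sum(axis=0)
    k = 1.0 / np.log(X.shape[0])
    e = -k * (P * np.log(P + 1e-12)).sum(axis=0)   # per-criterion entropy
    return (1 - e) / (1 - e).sum()                 # more informative -> larger weight

def topsis(X, w):
    """Classic TOPSIS closeness scores (all criteria treated as benefits here)."""
    V = (X / np.linalg.norm(X, axis=0)) * w        # weighted, vector-normalized
    best, worst = V.max(axis=0), V.min(axis=0)
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)

# Hypothetical controller selection: rows = candidate controllers, columns =
# performance, load headroom, Bluetooth signal strength, scan frequency.
X = np.array([[0.8, 0.6, 0.7, 0.9],
              [0.9, 0.4, 0.8, 0.7],
              [0.6, 0.9, 0.5, 0.8]])
U = np.sqrt(X)   # assumed concave utility transform (placeholder)
scores = topsis(U, entropy_weights(U))
print(scores.argsort()[::-1])   # best-first ranking; runner-up takes over on failure
```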
{"title":"UETOPSIS: A Data-Driven Intelligence Approach to Security Decisions for Edge Computing in Smart Cities","authors":"Lijun Xiao, Dezhi Han, Kuan-Ching Li, Muhammad Khurram Khan","doi":"10.1145/3648373","DOIUrl":"https://doi.org/10.1145/3648373","url":null,"abstract":"<p>Despite considerable technological advances for smart cities, they still face problems such as instability of cloud server connection, insecurity during data transmission, and slight deficiencies in TCP/IP network architecture. To address such issues, we propose a data-driven intelligence approach to security decisions under Named Data Networking (NDN) architecture for edge computing, taking into consideration factors that impact device entry in smart cities, such as device performance, load, Bluetooth signal strength, and scan frequency. Despite existing techniques for Order Preference by Similarity to Ideal Solution (TOPSIS)-based on entropy weights methods are improved and applied, there exist unstable decision results. Due to this, we propose a technique for Order Preference by Similarity to Ideal Solution (TOPSIS)-based on utility function and entropy weights, named UETOPSIS, where the corresponding utility function is applied according to the influence of each attribute on the decision, ensuring the stability of the ranking of decision results. We rely on an entropy-based weights mechanism to select a suitable master controller for the design of the multi-control protocol in the smart city system, and utilize a utility function to calculate the attribute values and then combine the normalized attribute values of utility numbers, starting by analyzing the main work of the controllers. Lastly, a prototype is developed for performance evaluation purposes. Experimental evaluation and analysis show that the proposed work has better authenticity and reliability than existing works and can reduce the workload of edge computing devices when forwarding data, with stability 24.7% higher than TOPSIS, significantly improving the performance and stability of system fault tolerance and reliability in smart cities, as the second-ranked controller can efficiently take over the work when a central controller fails or damaged.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"10 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139762180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Exploiting Fine-grained Dimming with Improved LiFi Throughput
Xiao Zhang, James Mariani, Li Xiao, Matt W. Mutka
ACM Transactions on Sensor Networks, published 2024-02-13. DOI: 10.1145/3643814

Optical wireless communication (OWC) shows great potential due to its broad spectrum and the exceptional intensity-switching speed of LEDs. Under poor channel conditions, most OWC systems switch from complex, more error-prone high-order modulation schemes to the more robust On-Off Keying (OOK) modulation defined in the IEEE OWC standard. This paper presents LiFOD, a high-speed indoor OOK-based OWC system with fine-grained dimming support. While ensuring fine-grained dimming, LiFOD achieves robust communication at up to 400 Kbps at a distance of 6 meters. This is the first time the data rate has been improved via OWC dimming, in contrast to previous approaches that trade off dimming against communication. LiFOD makes two key technical contributions. First, LiFOD utilizes Compensation Symbols (CS) as a reliable side-channel to represent bit patterns dynamically and improve throughput. We first design greedy-based bit pattern mining and then propose 2D feature enhancement via a YOLO model for real-time bit pattern mining. Second, LiFOD jointly redesigns optical symbols and CS relocation schemes for fine-grained dimming and robust decoding. Experiments on low-cost BeagleBone prototypes with commercial LED lamps and a photodiode (PD) demonstrate that LiFOD significantly outperforms the state-of-the-art system with 2.1x throughput on the SIGCOMM17 data-trace.
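As a toy illustration of compensation symbols, the sketch below (assumed frame layout and names, not LiFOD's actual encoder) pads each OOK frame with CS bits that steer the ON ratio toward a dimming target; LiFOD additionally exploits where the CS land as a side-channel for frequently mined bit patterns, which is omitted here.

```python
def ook_frame(data_bits, dim_target, frame_len):
    """Toy OOK frame: data bits first, then compensation symbols (CS) chosen
    so the frame's ON ratio approaches the dimming target."""
    pad = frame_len - len(data_bits)
    assert pad >= 0, "frame too small for payload"
    ones = round(dim_target * frame_len) - sum(data_bits)
    ones = min(max(ones, 0), pad)          # clamp to the available CS slots
    return list(data_bits) + [1] * ones + [0] * (pad - ones)

# Example: 50% dimming with an 8-bit payload in a 16-symbol frame.
frame = ook_frame([1, 0, 1, 1, 0, 0, 0, 1], dim_target=0.5, frame_len=16)
print(frame, sum(frame) / len(frame))      # ON ratio ends up at 0.5
```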
{"title":"Exploiting Fine-grained Dimming with Improved LiFi Throughput","authors":"Xiao Zhang, James Mariani, Li Xiao, Matt W. Mutka","doi":"10.1145/3643814","DOIUrl":"https://doi.org/10.1145/3643814","url":null,"abstract":"<p>Optical wireless communication (OWC) shows great potential due to its broad spectrum and the exceptional intensity switching speed of LEDs. Under poor conditions, most OWC systems switch from complex and more error prone high-order modulation schemes to more robust On-Off Keying (OOK) modulation defined in the IEEE OWC standard. This paper presents LiFOD, a high-speed indoor OOK-based OWC system with fine-grained dimming support. While ensuring fine-grained dimming, LiFOD remarkably achieves robust communication at up to 400 Kbps at a distance of 6 meters. This is the first time that the data rate has improved via OWC dimming in comparison to the previous approaches that consider trading off dimming and communication. LiFOD makes two key technical contributions. First, LiFOD utilizes Compensation Symbols (CS) as a reliable side-channel to represent bit patterns dynamically and improve throughput. We firstly design greedy-based bit pattern mining. Then we propose 2D feature enhancement via YOLO model for real-time bit pattern mining. Second, LiFOD synchronously redesigns optical symbols and CS relocation schemes for fine-grained dimming and robust decoding. Experiments on low-cost Beaglebone prototypes with commercial LED lamps and the photodiode (PD) demonstrate that LiFOD significantly outperforms the state-of-art system with 2.1x throughput on the SIGCOMM17 data-trace.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"53 29 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139761827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

An Experimental Study on BLE 5 Mesh Applied to Public Transportation
Anderson Biegelmeyer, Alexandre dos Santos Roque, Edison Pignaton de Freitas
ACM Transactions on Sensor Networks, published 2024-02-12. DOI: 10.1145/3647641
In-Vehicle Wireless Sensor Networks (IVWSNs) are gaining ground among car manufacturers because they save time in the assembly process, cut harness and after-sales costs, and reduce vehicle weight, which helps diminish fuel consumption. No single wireless technology has been settled on for IVWSNs, because each candidate has its own characteristics, and this is probably one of the reasons for their slow adoption in the automotive industry. A gap in the Wireless Sensor Network (WSN) literature for the automotive domain is that related work focuses only on ordinary cars, mostly with a star topology and rarely with a mesh topology. This paper aims to fill this gap with an experimental study that verifies the new Bluetooth 5 technology working in a mesh topology applied to public transportation systems (buses). To perform this evaluation, a setup emulating an IVWSN was deployed in a working city bus. While the network metrics were measured, the bus was operated under a variety of conditions along its route to determine the influence of the passengers and the overall environment on data transmission. The results suggest that Bluetooth 5 in a mesh topology is a promising candidate for IVWSNs: it lost only 0.16% of packets in the worst test, covered a wider range than its previous version, and achieved better RSSI and jitter, with lower transmission power, than a star topology. The round-trip time results can support the analysis of time-critical applications.
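For reference, the two link-quality metrics central to the study can be computed from packet logs as in the generic sketch below (the function names and the jitter definition, mean absolute deviation of inter-arrival gaps, are our assumptions, not the authors' measurement code).

```python
import numpy as np

def packet_delivery_ratio(num_sent, num_received):
    """Fraction of transmitted packets that arrived, e.g. 99.84% in the worst test above."""
    return num_received / num_sent

def jitter(arrival_times):
    """Simple jitter estimate: mean absolute deviation of inter-arrival gaps (seconds)."""
    gaps = np.diff(np.asarray(arrival_times, dtype=float))
    return float(np.mean(np.abs(gaps - gaps.mean())))
```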
{"title":"An Experimental Study on BLE 5 Mesh Applied to Public Transportation","authors":"Anderson Biegelmeyer, Alexandre dos Santos Roque, Edison Pignaton de Freitas","doi":"10.1145/3647641","DOIUrl":"https://doi.org/10.1145/3647641","url":null,"abstract":"<p>Nowadays In-Vehicle Wireless Sensor Networks (IVWSN) are taking place in car manufacturers because it saves time in the assembling process, saves costs in harness and after-sales, and represents less weight on vehicles helping in diminishing fuel consumption. There is no definition for wireless solution technology for IVWSN, because each one has its own characteristics, and probably this is one of the reasons for its smooth usage in the automotive industry. A gap identified in Wireless Sensor Networks (WSN) for the automotive domain is that the related literature focuses only on ordinary cars with a star topology and few of them with mesh topology. This paper aims to cover this gap by presenting an experimental study performed on verifying the new Bluetooth 5 technology working in a mesh topology applied to public transportation systems (buses). In order to perform this evaluation, a setup to emulate an IVWSN was deployed in a working city bus. Measuring the network metrics, the bus was placed under work in a variety of conditions during its trajectory to determine the influence of the passengers and the whole environment in the data transmission. The results suggest Bluetooth 5 in a mesh topology as a promising candidate for IVWSN because it showed the robustness of losing only 0.16% packets in the worst test, as well as its ability to cover a wider range compared to its previous version, indeed a better RSSI and jitter, with lower transmission power, compared to a star topology. The round trip time results can supports the analysis for time-critical applications.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"4 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139762020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Evaluating Compressive Sensing on the Security of Computer Vision Systems
Yushi Cheng, Boyang Zhou, Yanjiao Chen, Yi-Chao Chen, Xiaoyu Ji, Wenyuan Xu
ACM Transactions on Sensor Networks, published 2024-02-08. DOI: 10.1145/3645093

The rising demand for utilizing fine-grained data in deep-learning (DL) based intelligent systems presents challenges for the collection and transmission abilities of real-world devices. Deep compressive sensing, which employs deep learning algorithms to compress signals at the sensing stage and reconstruct them with high quality at the receiving stage, provides a state-of-the-art solution to the problem of large-scale fine-grained data. However, recent works have shown that fatal security flaws exist in current deep learning methods and that such instability is universal across DL-based image reconstruction methods. In this paper, we assess the security risks introduced by deep compressive sensing in widely used computer vision systems in the face of adversarial example attacks and poisoning attacks. To carry out this security inspection in an unbiased and complete manner, we develop a comprehensive methodology and a set of evaluation metrics that cover all potential combinations of attack methods, datasets (application scenarios), categories of deep compressive sensing models, and image classifiers. The results demonstrate that deep compressive sensing models unknown to adversaries can protect a computer vision system from adversarial example attacks and poisoning attacks, whereas models exposed to adversaries can make the system more vulnerable.
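One standard instance of the adversarial-example attacks evaluated in such studies is single-step FGSM applied end-to-end through the compressive-sensing reconstruction and the classifier; the PyTorch sketch below is our generic illustration (the pipeline object and epsilon are assumptions), not the paper's attack suite.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(pipeline, x, y, eps=0.03):
    """One-step FGSM against an end-to-end pipeline that senses, reconstructs
    (deep compressive sensing), and classifies. `pipeline` maps images to logits."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(pipeline(x), y)
    loss.backward()
    # Perturb each pixel in the direction that increases the loss, then clip
    # back to the valid image range.
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```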
{"title":"Evaluating Compressive Sensing on the Security of Computer Vision Systems","authors":"Yushi Cheng, Boyang Zhou, Yanjiao Chen, Yi-Chao Chen, Xiaoyu Ji, Wenyuan Xu","doi":"10.1145/3645093","DOIUrl":"https://doi.org/10.1145/3645093","url":null,"abstract":"<p>The rising demand for utilizing fine-grained data in deep-learning (DL) based intelligent systems presents challenges for the collection and transmission abilities of real-world devices. Deep compressive sensing, which employs deep learning algorithms to compress signals at the sensing stage and reconstruct them with high quality at the receiving stage, provides a state-of-the-art solution for the problem of large-scale fine-grained data. However, recent works have proven that fatal security flaws exist in current deep learning methods and such instability is universal for DL-based image reconstruction methods. In this paper, we assess the security risks introduced by deep compressive sensing in the widely-used computer vision system in the face of adversarial example attacks and poisoning attacks. To implement the security inspection in an unbiased and complete manner, we develop a comprehensive methodology and a set of evaluation metrics to manage all potential combinations of attack methods, datasets (application scenarios), categories of deep compressive sensing models, and image classifiers. The results demonstrate that deep compressive sensing models unknown to adversaries can protect the computer vision system from adversarial example attacks and poisoning attacks, whereas the ones exposed to adversaries can cause the system to become more vulnerable.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"104 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139762032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Flow-Time Minimization for Timely Data Stream Processing in UAV-Aided Mobile Edge Computing
Zichuan Xu, Haiyang Qiao, Weifa Liang, Zhou Xu, Qiufen Xia, Pan Zhou, Omer F. Rana, Wenzheng Xu
ACM Transactions on Sensor Networks, published 2024-02-02. DOI: 10.1145/3643813
Unmanned Aerial Vehicles (UAVs) have gained increasing attention from both academic and industrial communities, due to their flexible deployment and efficient line-of-sight communication. Recently, UAVs equipped with base stations have been envisioned as a key technology for providing 5G network services to mobile users. In this paper, we provide timely services on the data streams of mobile users in a UAV-aided Mobile Edge Computing (MEC) network, in which each UAV is equipped with a 5G small-cell base station for communication and data processing. Specifically, we first formulate a flow-time minimization problem that jointly caches services and offloads tasks of mobile users to the UAV-aided MEC, where the flow-time of a user request is the duration from the time the request is issued to the time it completes, subject to resource and energy capacity constraints on each UAV. We then propose a spatial-temporal learning optimization framework. Based on this framework, we also devise an online algorithm with a competitive ratio for the problem, leveraging round-robin scheduling and dual-fitting techniques. Finally, we evaluate the performance of the proposed algorithms through simulation. The simulation results demonstrate that the proposed algorithms outperform their comparison counterparts, reducing the flow-time by no less than 19% on average.
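To ground the objective, the sketch below simulates round-robin scheduling on a single UAV server and sums the flow-time (completion minus issue time) over all requests; it is a toy model of the metric and of the scheduling primitive the paper builds on, not the proposed online algorithm.

```python
from collections import deque

def total_flow_time(jobs, quantum=1.0):
    """jobs: list of (arrival_time, work). Round-robin on one server;
    returns the summed flow time (completion - arrival) over all jobs."""
    jobs = sorted(jobs)                       # by arrival time
    t, i, q, flow = 0.0, 0, deque(), 0.0
    remaining = {}
    while i < len(jobs) or q:
        if not q:                             # server idle: jump to next arrival
            t = max(t, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= t:
            remaining[i] = jobs[i][1]; q.append(i); i += 1
        j = q.popleft()
        run = min(quantum, remaining[j])
        t += run; remaining[j] -= run
        while i < len(jobs) and jobs[i][0] <= t:   # arrivals during this slice
            remaining[i] = jobs[i][1]; q.append(i); i += 1
        if remaining[j] > 1e-12:
            q.append(j)                       # not done: back of the queue
        else:
            flow += t - jobs[j][0]            # done: accumulate its flow time

    return flow

# Jobs finish at t=4 and t=2, so the total flow time is 4 + 1 = 5.
print(total_flow_time([(0.0, 3.0), (1.0, 1.0)]))
```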
{"title":"Flow-Time Minimization for Timely Data Stream Processing in UAV-Aided Mobile Edge Computing","authors":"Zichuan Xu, Haiyang Qiao, Weifa Liang, Zhou Xu, Qiufen Xia, Pan Zhou, Omer F. Rana, Wenzheng Xu","doi":"10.1145/3643813","DOIUrl":"https://doi.org/10.1145/3643813","url":null,"abstract":"<p>Unmanned Aerial Vehicle (UAV) has gained increasing attentions by both academic and industrial communities, due to its flexible deployment and efficient line-of-sight communication. Recently, UAVs equipped with base stations have been envisioned as a key technology to provide 5G network services for mobile users. In this paper, we provide timely services on the data streams of mobile users in a UAV-aided Mobile Edge Computing (MEC) network, in which each UAV is equipped with a 5G small-cell base station for communication and data processing. Specifically, we first formulate a flow-time minimization problem by jointly caching services and offloading tasks of mobile users to the UAV-aided MEC with the aim to minimize the flow-time, where the flow-time of a user request is referred to the time duration from the request issuing time point to its completion point, subject to resource and energy capacity on each UAV. We then propose a spatial-temporal learning optimization framework. We also devise an online algorithm with a competitive ratio for the problem based upon the framework, by leveraging the round-robin scheduling and dual fitting techniques. Finally, we evaluate the performance of the proposed algorithms through experimental simulation. The simulation results demonstrated that the proposed algorithms outperform their comparison counterparts, by reducing the flow-time no less than 19% on average.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"2 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139668926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Behave Differently when Clustering: a Semi-Asynchronous Federated Learning Approach for IoT
Boyu Fan, Xiang Su, Sasu Tarkoma, Pan Hui
ACM Transactions on Sensor Networks, published 2024-01-25. DOI: 10.1145/3639825

The Internet of Things (IoT) has revolutionized the connectivity of diverse sensing devices, generating an enormous volume of data. However, applying machine learning algorithms on sensing devices presents substantial challenges due to resource constraints and privacy concerns. Federated learning (FL) emerges as a promising solution that allows models to be trained in a distributed manner while preserving data privacy on client devices. We contribute SAFI, a semi-asynchronous FL approach based on clustering that achieves a novel in-cluster synchronous and out-cluster asynchronous FL training mode. Specifically, we propose a three-tier architecture to enable IoT data processing on edge devices and design a cluster-selection module to effectively group heterogeneous edge devices based on their processing capacities. The performance of SAFI has been extensively evaluated through experiments conducted on a real-world testbed. As the heterogeneity of edge devices increases, SAFI surpasses the baselines in convergence time, achieving a speedup of approximately 3x when the heterogeneity ratio is 7:1. Moreover, SAFI performs favorably in non-IID settings and incurs lower communication cost than FedAsync. Notably, SAFI is the first Java-implemented FL approach and holds significant promise as an efficient FL algorithm for IoT environments.
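A minimal sketch of the two mechanisms the abstract names, capacity-based clustering and staleness-discounted out-cluster merging, is given below in Python (SAFI itself is Java-implemented, and the exact mixing rule here is our FedAsync-style simplification, not SAFI's).

```python
import numpy as np

def cluster_by_capacity(capacities, n_clusters):
    """Group clients with similar processing capacity via simple quantile binning;
    each returned group trains synchronously (in-cluster) at its own pace."""
    order = np.argsort(capacities)
    return np.array_split(order, n_clusters)

def server_update(global_w, cluster_w, staleness, alpha=0.5):
    """Out-cluster asynchronous merge: a cluster's averaged weights are mixed
    into the global model with a weight discounted by staleness (rounds late)."""
    mix = alpha / (1.0 + staleness)
    return (1 - mix) * global_w + mix * cluster_w

# Example: 6 clients, 2 clusters; a 1-round-stale cluster update gets mix 0.25.
print(cluster_by_capacity([1.0, 7.0, 1.2, 6.8, 1.1, 7.1], 2))
```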
{"title":"Behave Differently when Clustering: a Semi-Asynchronous Federated Learning Approach for IoT","authors":"Boyu Fan, Xiang Su, Sasu Tarkoma, Pan Hui","doi":"10.1145/3639825","DOIUrl":"https://doi.org/10.1145/3639825","url":null,"abstract":"<p>The Internet of Things (IoT) has revolutionized the connectivity of diverse sensing devices, generating an enormous volume of data. However, applying machine learning algorithms to sensing devices presents substantial challenges due to resource constraints and privacy concerns. Federated learning (FL) emerges as a promising solution allowing for training models in a distributed manner while preserving data privacy on client devices. We contribute <i>SAFI</i>, a semi-asynchronous FL approach based on clustering to achieve a novel in-cluster synchronous and out-cluster asynchronous FL training mode. Specifically, we propose a three-tier architecture to enable IoT data processing on edge devices and design a clustering selection module to effectively group heterogeneous edge devices based on their processing capacities. The performance of <i>SAFI</i> has been extensively evaluated through experiments conducted on a real-world testbed. As the heterogeneity of edge devices increases, <i>SAFI</i> surpasses the baselines in terms of the convergence time, achieving a speedup of approximately × 3 when the heterogeneity ratio is 7:1. Moreover, <i>SAFI</i> demonstrates favorable performance in non-IID settings and requires lower communication cost compared to FedAsync. Notably, <i>SAFI</i> is the first Java-implemented FL approach and holds significant promise to serve as an efficient FL algorithm in IoT environments.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"16 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139558910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

SecEG: A Secure and Efficient Strategy against DDoS Attacks in Mobile Edge Computing
Haiyang Huang, Tianhui Meng, Jianxiong Guo, Xuekai Wei, Weijia Jia
ACM Transactions on Sensor Networks, published 2024-01-23. DOI: 10.1145/3641106

Application-layer distributed denial-of-service (DDoS) attacks incapacitate systems by exhausting their resources, causing service interruptions, financial losses, and more. Consequently, advanced deep-learning techniques are used to detect and mitigate these attacks in cloud infrastructures. In mobile edge computing (MEC), however, it is economically impractical to equip each node with defensive resources, as these resources may largely sit unused on edge devices. Furthermore, current methods concentrate mainly on improving the accuracy of DDoS attack detection and saving CPU resources, neglecting the effective allocation of computational power to benign tasks under DDoS attacks. To address these issues, this paper introduces SecEG, a secure and efficient strategy against DDoS attacks for MEC that integrates container-based task isolation with lightweight online anomaly detection on edge nodes. More specifically, a new model is proposed to analyze resource-contention dynamics between DDoS attacks and benign tasks. Then, combining periodic packet sampling with real-time attack-intensity prediction, an autoencoder-based method is proposed to detect DDoS attacks. We leverage an efficient scheduling method to optimize edge resource allocation and the service quality for benign users during DDoS attacks. Experiments in a real-world edge environment validate the efficacy of the proposed SecEG strategy: compared to conventional methods, the service rate of benign requests increases by 23% under intense DDoS attacks, and up to 35% of CPU resources are saved.
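The autoencoder-based detector can be sketched as reconstruction-error thresholding on sampled traffic features; the PyTorch model below (layer sizes and threshold calibration are our assumptions) illustrates the idea, not SecEG's exact detector.

```python
import torch
import torch.nn as nn

class TrafficAE(nn.Module):
    """Small autoencoder trained on benign traffic features only."""
    def __init__(self, dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 16), nn.ReLU(), nn.Linear(16, 4))
        self.dec = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, dim))
    def forward(self, x):
        return self.dec(self.enc(x))

def is_attack(model, features, threshold):
    """Flag sampled traffic windows whose reconstruction error exceeds a
    threshold calibrated on benign traffic (e.g., a high percentile)."""
    with torch.no_grad():
        err = ((model(features) - features) ** 2).mean(dim=-1)
    return err > threshold
```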
{"title":"SecEG: A Secure and Efficient Strategy against DDoS Attacks in Mobile Edge Computing","authors":"Haiyang Huang, Tianhui Meng, Jianxiong Guo, Xuekai Wei, Weijia Jia","doi":"10.1145/3641106","DOIUrl":"https://doi.org/10.1145/3641106","url":null,"abstract":"<p>Application-layer distributed denial-of-service (DDoS) attacks incapacitate systems by using up their resources, causing service interruptions, financial losses, and more. Consequently, advanced deep-learning techniques are used to detect and mitigate these attacks in cloud infrastructures. However, in mobile edge computing (MEC), it becomes economically impractical to equip each node with defensive resources, as these resources may largely remain unused in edge devices. Furthermore, current methods are mainly concentrated on improving the accuracy of DDoS attack detection and saving CPU resources, neglecting the effective allocation of computational power for benign tasks under DDoS attacks. To address these issues, this paper introduces SecEG, a secure and efficient strategy against DDoS attacks for MEC that integrates container-based task isolation with lightweight online anomaly detection on edge nodes. More specifically, a new model is proposed to analyze resource contention dynamics between DDoS attacks and benign tasks. Subsequently, by employing periodic packet sampling and real-time attack intensity predicting, an autoencoder-based method is proposed to detect DDoS attacks. We leverage an efficient scheduling method to optimize the edge resource allocation and the service quality for benign users during DDoS attacks. When executed in the real-world edge environment, our experimental findings validate the efficacy of the proposed SecEG strategy. Compared to conventional methods, the service rate of benign requests increases by 23% under intense DDoS attacks, and the CPU resource is saved up to 35%.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"10 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139559092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Holistic Energy Awareness and Robustness for Intelligent Drones
Ravi Raj Saxena, Joydeep Pal, Srinivasan Iyengar, Bhawana Chhaglani, Anurag Ghosh, Venkata N. Padmanabhan, Prabhakar T. Venkata
ACM Transactions on Sensor Networks, published 2024-01-23. DOI: 10.1145/3641855
Drones represent a significant technological shift at the convergence of on-demand cyber-physical systems and edge intelligence. However, realizing their full potential necessitates managing their limited energy resources carefully. Prior work looks at factors such as battery characteristics, intelligent edge sensing considerations, planning, and robustness in isolation, but a global view of energy awareness that considers these factors and their tradeoffs is essential. To this end, we present results from a detailed empirical study of battery charge-discharge characteristics and of the impact of altitude and lighting on edge inference accuracy. Our energy models, derived from these observations, predict the energy usage of various manoeuvres with an error of 5.6%, a 2.5X improvement over the state-of-the-art. Furthermore, we propose a holistic energy-aware multi-drone scheduling system that decreases energy consumption by 21.14% and mission times by 46.91% over state-of-the-art baselines. To achieve system robustness in the event of link or drone failure, we observe trends in the Packet Delivery Ratio and propose a methodology for establishing reliable communication between nodes. We release an open-source implementation of our system. Finally, we tie all of these pieces together in a people-counting case study.
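As a simple stand-in for the per-manoeuvre energy models described above, the sketch below fits a least-squares model of manoeuvre energy against flight features such as duration, speed, and climb rate; the feature choice is our assumption, and the paper's models are derived from far more detailed measurements.

```python
import numpy as np

def fit_energy_model(features, energy):
    """Least-squares fit of per-manoeuvre energy (J) against flight features.
    features: (n, k) array, e.g. columns = duration, speed, climb rate."""
    X = np.column_stack([np.ones(len(features)), features])   # add intercept
    coef, *_ = np.linalg.lstsq(X, energy, rcond=None)
    return coef

def predict_energy(coef, features):
    X = np.column_stack([np.ones(len(features)), features])
    return X @ coef
```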
{"title":"Holistic Energy Awareness and Robustness for Intelligent Drones","authors":"Ravi Raj Saxena, Joydeep Pal, Srinivasan Iyengar, Bhawana Chhaglani, Anurag Ghosh, Venkata N. Padmanabhan, Prabhakar T. Venkata","doi":"10.1145/3641855","DOIUrl":"https://doi.org/10.1145/3641855","url":null,"abstract":"<p>Drones represent a significant technological shift at the convergence of on-demand cyber-physical systems and edge intelligence. However, realizing their full potential necessitates managing the limited energy resources carefully. Prior work looks at factors such as battery characteristics, intelligent edge sensing considerations, planning and robustness in isolation. But a global view of energy awareness that considers these factors and looks at various tradeoffs is essential. To this end, we present results from our detailed empirical study of battery charge-discharge characteristics and the impact of altitude and lighting on edge inference accuracy. Our energy models, derived from these observations, predict energy usage while performing various manoeuvres with an error of 5.6%, a 2.5X improvement over the state-of-the-art. Furthermore, we propose a holistic energy-aware multi-drone scheduling system that decreases the energy consumed by 21.14% and the mission times by 46.91% over state-of-the-art baselines. To achieve system robustness in the event of link or drone failure, we observe trends in Packet Delivery Ratio to propose a methodology to establish reliable communication between nodes. We release an open-source implementation of our system. Finally, we tie all of these pieces together using a people-counting case study.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"116 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139559110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

PAM-FOG Net: A Lightweight Weed Detection Model Deployed on Smart Weeding Robots
Jiahua Bao, Siyao Cheng, Jie Liu
ACM Transactions on Sensor Networks, published 2024-01-22. DOI: 10.1145/3641821

Visual target detection based on deep learning has been successful on high-computing-power devices, but its performance on edge devices in intelligent agriculture has not been prominent. Specifically, existing model architectures and optimization methods are not well suited to low-power edge devices, while agricultural tasks such as weed detection require high accuracy, short inference latency, and low cost. Although automated tuning methods are available, the search space is extremely large, and compressing and optimizing existing models greatly wastes tuning resources. In this article, we propose the lightweight PAM-FOG net, based on weed distribution and projection mapping. More significantly, we propose a novel model compression and optimization method fitted to our model. Compared with other models, PAM-FOG net runs on smart weeding robots supported by edge devices and achieves superior accuracy and a high frame rate. We effectively balance model size, performance, and inference speed, reducing the original model size by nearly 50% and power consumption by 26%, while improving the frame rate by 40%. These results show the effectiveness of our model architecture and optimization method, providing a reference for the future development of deep learning in intelligent agriculture.
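One common building block of such model compression is magnitude-based channel pruning; the PyTorch sketch below is a generic stand-in (the keep ratio and selection rule are our assumptions), not PAM-FOG's actual compression pipeline. Note that pruning a layer's output channels also requires adjusting the next layer's input channels.

```python
import torch

def prune_channels(conv, keep_ratio=0.5):
    """Magnitude-based channel pruning: keep the conv filters with the largest
    L1 norms and rebuild a smaller layer with the surviving weights."""
    norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))   # per-filter L1 norm
    k = max(1, int(keep_ratio * len(norms)))
    keep = norms.topk(k).indices.sort().values              # surviving filter indices
    pruned = torch.nn.Conv2d(conv.in_channels, k, conv.kernel_size,
                             conv.stride, conv.padding,
                             bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep]
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep]
    return pruned

# Example: halve the filters of a 3x3 conv layer.
small = prune_channels(torch.nn.Conv2d(16, 32, 3, padding=1), keep_ratio=0.5)
print(small.weight.shape)   # torch.Size([16, 16, 3, 3])
```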
{"title":"PAM-FOG Net: A Lightweight Weed Detection Model Deployed on Smart Weeding Robots","authors":"Jiahua Bao, Siyao Cheng, Jie Liu","doi":"10.1145/3641821","DOIUrl":"https://doi.org/10.1145/3641821","url":null,"abstract":"<p>Visual target detection based on deep learning with high computing power devices has been successful, but the performance in intelligent agriculture with edge devices has not been prominent. Specifically, the existing model architecture and optimization methods are not well-suited to low-power edge devices, the agricultural tasks such as weed detection require high accuracy, short inference latency, and low cost. Although there are automated tuning methods available, the search space is extremely large, using existing models for compression and optimization greatly wastes tuning resources. In this article, we propose a lightweight PAM-FOG net based on weed distribution and projection mapping. More significantly, we propose a novel model compression optimization method to fit our model. Compared with other models, PAM-FOG net runs on smart weeding robots supported by edge devices, and achieves superior accuracy and high frame rate. We effectively balance model size, performance and inference speed, reducing the original model size by nearly 50%, power consumption by 26%, and improving the frame rate by 40%. It shows the effectiveness of our model architecture and optimization method, which provides a reference for the future development of deep learning in intelligent agriculture.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"15 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139515383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}