Quantifying Direct Link Establishment Delay Between Android Devices
Tomás Lagos Jenschke, M. Amorim, S. Fdida
Pub Date: 2022-09-26 | DOI: 10.1109/LCN53696.2022.9843486
The enormous success of direct communication applications has shed light on the practical interest of device-to-device (D2D) communications. However, to set up a direct link, two neighboring nodes first have to detect each other, which introduces a delay before they can start sending and receiving data. This link establishment delay can be particularly unfavorable under strong mobility, as the availability of the direct link depends on how long the devices stay within communication range of each other. This paper reports on our experiments to evaluate the link establishment delay. We focus on Android devices and use the Nearby Connections Application Programming Interface (API), which supports Bluetooth Classic and Bluetooth Low Energy (BLE) for establishing connectivity. In a nutshell, we observe that link establishment takes several seconds to complete with Bluetooth Classic and even tens of seconds with BLE.
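The measurement methodology can be sketched as a simple timing harness around discovery and connection callbacks (the callables below are hypothetical stand-ins for the Nearby Connections lifecycle, not the study's actual instrumentation):

```python
import time

def measure_link_establishment(start_discovery, wait_connected):
    """Time from starting discovery until the direct link is usable.

    `start_discovery` begins advertising/discovery; `wait_connected`
    blocks until the connection callback fires. Both are hypothetical
    stand-ins for the platform's asynchronous API.
    """
    t0 = time.monotonic()
    start_discovery()
    wait_connected()  # returns once the link is established
    return time.monotonic() - t0

# Toy stand-ins emulating a 50 ms handshake.
delay = measure_link_establishment(lambda: None, lambda: time.sleep(0.05))
```

Repeating such a measurement over many trials and both radio technologies yields the per-technology delay distributions the abstract summarizes.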
Incentive-based Resource Allocation for Mobile Edge Learning
Mhd Saria Allahham, Amr Mohamed, H. Hassanein
Pub Date: 2022-09-26 | DOI: 10.1109/LCN53696.2022.9843405
Mobile Edge Learning (MEL) is a learning paradigm that facilitates training Machine Learning (ML) models on resource-constrained edge devices. MEL consists of an orchestrator, which represents the model owner of the learning task, and learners, which own the data locally. Enabling the learning process requires the model owner to motivate learners to train the ML model on their local data and to allocate sufficient resources. Time limitations and the possible existence of multiple orchestrators give rise to a resource allocation problem. We therefore model the incentive mechanism and resource allocation as a multi-round Stackelberg game and propose a Payment-based Time Allocation (PBTA) algorithm to solve it. In PBTA, orchestrators first determine the pricing; the learners then allocate each orchestrator a timeslot and determine the amount of data and resources for each orchestrator. Finally, we evaluate PBTA's performance and compare it against a recent state-of-the-art approach.
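The learners' side of such a scheme can be illustrated with a minimal sketch in which a learner splits its training-time budget across orchestrators in proportion to their offered payments (a simplified stand-in for the learners' best response; the paper derives the actual strategies from the Stackelberg game):

```python
def allocate_timeslots(payments, total_time):
    """Split one learner's time budget across orchestrators in
    proportion to the payment each orchestrator offers.

    `payments` maps orchestrator id -> offered payment; the
    proportional rule is illustrative, not PBTA's exact formula.
    """
    total = sum(payments.values())
    return {orch: total_time * pay / total for orch, pay in payments.items()}

# A learner with 8 s of training time and two competing orchestrators.
slots = allocate_timeslots({"orch_a": 3.0, "orch_b": 1.0}, total_time=8.0)
# orch_a receives 6.0 s, orch_b 2.0 s
```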
Reputation-Based Data Carrying for Web3 Networks
Q. Stokkink, C. Ileri, J. Pouwelse
Pub Date: 2022-09-26 | DOI: 10.1109/LCN53696.2022.9843374
Web3 networks are emerging to replace centrally-governed networking infrastructure. The integrity of the shared public infrastructure of Web3 networks is guaranteed through data sharing between nodes. However, due to the unstructured and highly partitioned nature of Web3 networks, sharing data between nodes in different partitions is challenging. In this paper we present the TSRP mechanism, which approaches the data sharing problem by having nodes audit each other to enforce the carrying of data between partitions. Reputation is used as an analogue for the likelihood of a node interacting with nodes from other partitions in the future, and the number of copies of data shared with a node is inversely related to that node's reputation. Using a real-world Twitter trace, we show that our implementation converges to the same number of copies as structured approaches.
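The inverse relationship between reputation and the number of copies can be sketched as follows (the linear form and the cap of 8 copies are hypothetical; TSRP's exact function may differ):

```python
def copies_to_share(reputation, max_copies=8):
    """Copies of a data item handed to a peer, inversely related to
    that peer's reputation: a well-reputed peer is already likely to
    reach other partitions, so it needs fewer redundant copies.
    Reputation is assumed normalized to [0, 1]; the linear shape and
    `max_copies` cap are illustrative choices.
    """
    reputation = min(max(reputation, 0.0), 1.0)  # clamp to [0, 1]
    return max(1, round(max_copies * (1.0 - reputation)))

few = copies_to_share(0.9)   # trusted peer: minimal redundancy
many = copies_to_share(0.1)  # untrusted peer: heavy redundancy
```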
Deep Sequence Models for Packet Stream Analysis and Early Decisions
Minji Kim, Dongeun Lee, Kookjin Lee, Doo-Chan Kim, Sangman Lee, Jinoh Kim
Pub Date: 2022-09-26 | DOI: 10.1109/LCN53696.2022.9843272
Packet stream analysis is essential for identifying attack connections early, while they are still in progress, enabling timely responses to protect system resources. However, effective analysis faces several challenges, including out-of-order packet sequences introduced by network dynamics, and class imbalance, with only a small fraction of attack connections available to characterize. To overcome these challenges, we present two deep sequence models: (i) a bidirectional recurrent structure designed for resilience to out-of-order packets, and (ii) a pre-training-enabled sequence-to-sequence structure that better handles unbalanced class distributions using self-supervised learning. We evaluate the presented models on a real network dataset created from month-long traffic traces collected from backbone links, together with the associated intrusion log. The experimental results support the feasibility of the presented models, which reach up to 94.8% F1 score with the first five packets (k=5), outperforming baseline deep learning models.
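The early-decision setting can be sketched as classifying a connection from its first k packets only, rather than waiting for the flow to finish (k=5 matches the setting reported in the abstract; the `classify` callable stands in for a trained sequence model):

```python
def early_decision(packet_features, classify, k=5):
    """Decide on a connection as soon as its first k packets are
    available. Returns None while fewer than k packets have arrived;
    `classify` is a hypothetical stand-in for a trained model.
    """
    prefix = packet_features[:k]
    if len(prefix) < k:
        return None  # not enough packets yet to decide
    return classify(prefix)

# Toy classifier: flag flows whose early packets are unusually small.
verdict = early_decision([40, 44, 41, 38, 42, 1500],
                         lambda p: sum(p) / len(p) < 100)
```

Deciding on the k-packet prefix is what makes the response timely: the verdict is available while the connection is still in progress.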
Indoor UWB localisation: LocURa4IoT testbed and dataset presentation
Quentin Vey, R. Dalcé, A. Bossche, T. Val
Pub Date: 2022-09-26 | DOI: 10.1109/LCN53696.2022.9843513
This paper introduces the Localisation and UWB-Based Ranging testbed for the Internet of Things (LocURa4IoT). This platform has been built to aid in the design and performance evaluation of proposals addressing indoor localization. The paper presents the experiment submission process and describes the demonstrations of the testbed's capabilities and characteristics. The same scenario has been executed beforehand, and the resulting dataset is made available online to the community.
e-CLDC: Efficient Cross-Layer protocol for Data Collection in WBAN for remote patient monitoring
Youmna Nasser, Wafa Badreddine
Pub Date: 2022-09-26 | DOI: 10.1109/LCN53696.2022.9843811
Wireless Body Area Networks (WBANs) have lately attracted many researchers as a relatively new technology that emerged with the development of wireless communication and sensor devices small enough to fit on a person's body. WBANs' primary concerns range from energy-efficient communication to designing delay-efficient protocols that cope with human body mobility. In this work, we propose an efficient converge-cast (i.e., data collection) protocol, e-CLDC. Our protocol is based on a cross-layer approach that involves the physical, MAC, and network layers. At the network layer, e-CLDC implements a multi-hop, multi-path strategy to increase reliability and energy efficiency. At the MAC layer, a scheduling mechanism is adopted to avoid collisions and overhearing. At the physical layer, sensor nodes dynamically adjust their transmission power according to the body posture. e-CLDC achieves an average of 99% reliability across different body postures. We compare our protocol's performance to a one-hop strategy; the latter achieves only 64% reliability, and that despite a high sensor transmission power.
ForestEdge: Unobtrusive Mechanism Interception in Environmental Monitoring
Patrick Lampe, Markus Sommer, Artur Sterz, Jonas Hochst, Christian Uhl, Bernd Freisleben
Pub Date: 2022-09-26 | DOI: 10.1109/LCN53696.2022.9843426
A network for environmental monitoring typically requires a large number of sensors. If a longer service life is intended, it is essential that the deployed sensor systems can be upgraded without modifying hardware. Often, these networks rely on proprietary hardware/software components tailored to the desired functionality, but these could technically also be used for other applications. We present a demo of mechanism interception, a novel approach to unobtrusively add or modify the functionality of an existing networked system, in our case a TreeTalker, without touching any proprietary components. We demonstrate how a cloud infrastructure can be unobtrusively replaced by an edge infrastructure in a wireless sensor network. Our results indicate that mechanism interception is a compelling approach for our scenario, providing previously unavailable functionality without modifying existing components.
An Application-Specific Power Consumption Optimization for Wearable Electrocardiogram Devices
Ahmed Badr, A. Rashwan, Khalid Elgazzar
Pub Date: 2022-09-26 | DOI: 10.1109/LCN53696.2022.9843460
This paper explores ways to reduce energy consumption in wearable and Remote Patient Monitoring (RPM) devices. We use the XBeats ECG patch as a case study for benchmarking the power consumption of a wearable remote Electrocardiogram (ECG) device. Systematic energy consumption profiling criteria are proposed for evaluating the participating components in an RPM device. We isolate each hardware component to find power-intensive processes in the XBeats system, discover energy consumption patterns, and measure voltage, current, power, and energy consumption over a given time period. The proposed optimization techniques yield significant improvements for the hardware components on the ECG patch. The results show that optimizing the data acquisition process saves 8.2% of the original power consumption, and optimizing data transmission over BLE saves a further 1.62%, thus extending the device lifetime. Lastly, we optimize the data logging operation to save 54% of the data initially written to an external drive.
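The per-component profiling quantity follows directly from the measured values: energy over a window is voltage times current times time. A minimal sketch of the arithmetic (the numbers below are illustrative, not measurements from the paper):

```python
def energy_joules(voltage_v, current_a, seconds):
    """Energy drawn by one isolated component over a measurement
    window: E = V * I * t, the quantity profiled per component."""
    return voltage_v * current_a * seconds

# Illustrative example: a radio drawing 15 mA at 3.3 V for 10 s.
e = energy_joules(3.3, 0.015, 10)  # 0.495 J over the window
saved = e * 0.082                  # what an 8.2% reduction would recover
```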
LCN Sponsors and Supporters
Pub Date: 2022-09-26 | DOI: 10.1109/lcn53696.2022.9843569
MicroTL: Transfer Learning on Low-Power IoT Devices
Christos Profentzas, M. Almgren, O. Landsiedel
Pub Date: 2022-09-26 | DOI: 10.1109/LCN53696.2022.9843735
Deep Neural Networks (DNNs) on IoT devices are becoming readily available for classification tasks on sensor data such as images and audio. However, DNNs are trained using extensive computational resources, such as GPUs on cloud services, and once quantized and deployed on an IoT device, they remain unchanged. We argue in this paper that this approach has three disadvantages. First, IoT devices are deployed in real-world scenarios where the initial problem may shift over time (e.g., to new or similar classes), but without re-training, DNNs cannot adapt to such changes. Second, IoT devices need to use energy-preserving communication with limited reliability and network bandwidth, which can delay or restrict the transmission of essential training sensor data to the cloud. Third, collecting and storing training sensor data in the cloud poses privacy concerns. A promising technique to mitigate these concerns is on-device Transfer Learning (TL). However, bringing TL to resource-constrained devices faces challenges and trade-offs in computation, energy, and memory, which this paper addresses. This paper introduces MicroTL, transfer learning on low-power IoT devices. MicroTL tailors TL to IoT devices without requiring communication with the cloud. Notably, we found that MicroTL takes 3x less energy and 2.8x less time than transmitting all data to train an entirely new model in the cloud, showing that it is more efficient to retrain parts of an existing neural network on the IoT device.
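The core idea of retraining only parts of an existing network can be illustrated with a minimal sketch: keep the frozen feature extractor's outputs and retrain only a small softmax head on the device (a simplified stand-in, not MicroTL's exact procedure; all data and hyperparameters below are illustrative):

```python
import numpy as np

def retrain_head(features, labels, classes, lr=0.1, epochs=200):
    """Retrain only a softmax classification head on the frozen
    extractor's feature vectors, via plain gradient descent on the
    cross-entropy loss. A sketch of head-only transfer learning."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=(features.shape[1], classes))
    onehot = np.eye(classes)[labels]
    for _ in range(epochs):
        logits = features @ w
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)          # softmax probabilities
        w -= lr * features.T @ (p - onehot) / len(labels)
    return w

# Toy frozen-extractor outputs for two drifted classes.
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
y = np.array([0, 0, 1, 1])
w = retrain_head(X, y, classes=2)
preds = (X @ w).argmax(axis=1)
```

Only the small matrix `w` is updated on-device; the expensive feature extractor, trained once in the cloud, never moves or changes, which is where the energy and time savings come from.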