Data minimization is a legal principle that mandates limiting the collection of personal data to the necessary minimum. In this context, we address pervasive mobile-to-mobile recommender systems in which users establish ad hoc wireless connections between their mobile computing devices in physical proximity to exchange ratings, i.e., personal data from which they calculate recommendations. The specific problem is: how can users minimize the collection of ratings across all users while only being able to communicate with a subset of other users in physical proximity? A main difficulty is the mobility of users, which prevents, for instance, the creation and use of an overlay network to coordinate data collection. Users therefore have to decide whether to exchange ratings, and how many, whenever an ad hoc wireless connection is established. We model the randomness of these connections and apply an algorithm based on distributed gradient descent to solve the distributed data minimization problem at hand. We show that the algorithm robustly produces the fewest connections and also the fewest collected ratings compared to an array of baselines. We find that this simultaneously reduces the chances of an attacker relating users to ratings. In this sense, the algorithm also preserves the anonymity of users, yet only of those users who do not establish an ad hoc wireless connection with each other; users who do establish a connection are trivially not anonymous toward each other. We find that users can further minimize data collection and preserve their anonymity if they aggregate multiple ratings of the same item into a single rating and change their identifiers between connections.
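The two user-side measures mentioned at the end of this abstract, aggregating multiple ratings of the same item into one value and rotating identifiers between connections, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names and the choice of the mean as the aggregate are assumptions.

```python
import statistics
import uuid

def aggregate_ratings(ratings):
    """Collapse multiple (item, score) ratings of the same item into a
    single mean rating, so a user discloses at most one value per item."""
    by_item = {}
    for item, score in ratings:
        by_item.setdefault(item, []).append(score)
    return {item: statistics.mean(scores) for item, scores in by_item.items()}

def fresh_identifier():
    """Draw a new random pseudonym before each ad hoc connection, so
    ratings exchanged in different connections cannot be linked by ID."""
    return uuid.uuid4().hex
```

Aggregation reduces both the number of ratings transferred and the linkability of individual rating events, while identifier rotation prevents an observer from correlating a user's exchanges across connections.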
Contemporary applications leverage machine learning models to optimize performance, often necessitating data transmission to a remote server for training. However, this approach entails significant resource consumption, and a privacy concern arises, which Federated Learning addresses through a cyclical process involving in-device training (local model update) and subsequent reporting to the server for aggregation (global model update). In each iteration of this cycle, termed a communication round, a client selection component determines which devices will participate in improving the global model. However, existing literature inadequately addresses scenarios where optimized energy consumption is imperative. This paper introduces an Energy Saving Client Selection (ESCS) mechanism that considers decision criteria such as battery level, training time capacity, and network quality. As a pertinent use case, classification scenarios are utilized to compare the performance of ESCS against other state-of-the-art approaches. The findings reveal that ESCS effectively conserves energy while maintaining optimal performance. This research contributes to the ongoing discourse on energy-efficient client selection strategies within the domain of Federated Learning.
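A client selection step of the kind described here can be sketched as a weighted ranking over the stated criteria (battery level, training-time capacity, network quality). The weights and the linear scoring form below are illustrative assumptions, not the actual ESCS decision rule from the paper.

```python
def select_clients(clients, k, w_battery=0.5, w_time=0.3, w_net=0.2):
    """Rank candidate devices by a weighted score of battery level,
    remaining training-time capacity, and network quality (all normalized
    to [0, 1]), then pick the top k for this communication round.
    The weights here are illustrative, not those of ESCS."""
    def score(c):
        return (w_battery * c["battery"]        # fraction of charge left
                + w_time * c["time_capacity"]   # training-time budget
                + w_net * c["net_quality"])     # link quality
    return sorted(clients, key=score, reverse=True)[:k]
```

Weighting battery most heavily biases selection toward devices that can train without draining their energy reserves, which is the intuition behind energy-aware selection.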
Detecting risky driving has been a significant area of focus in recent years. Nonetheless, devising a practical, effective, and unobtrusive solution remains a complex challenge. Presently available technologies predominantly rely on visual cues or physical proximity, which complicates sensing. Motivated by this, we explore the possibility of utilizing mmWave radars exclusively to identify dangerous driving behaviors. Initially, we scrutinize the attributes of unsafe driving and pinpoint distinct patterns in range-Doppler readings brought about by nine common risky driving manoeuvres. Subsequently, we create an innovative Fused-CNN model that identifies instances of hazardous driving amidst regular driving and categorizes nine distinct types of dangerous driving actions. After conducting thorough experiments involving seven volunteers driving in real-world settings, we note that our system accurately distinguishes risky driving actions with an average precision of approximately 97%. To underscore the significance of our approach, we also compare it against established state-of-the-art methods.
With the development of passive sensing technology, WiFi-based identification research has attracted much attention in areas such as human–computer interaction and home security. Although WiFi sensing-based human identification has achieved initial success, it is currently mainly applicable to scenarios where the set of user identity categories is fixed, and not to scenarios where it changes frequently. In this paper, we propose an identification system (CIU-L) for scenarios where user identity categories change frequently, allowing for incremental registration and unregistration of identity categories. To the best of our knowledge, this is the first attempt to register and unregister user identity information under the previous identity category constraints. CIU-L employs a training and updating strategy in the registration phase of a new user to avoid catastrophic forgetting of old users' identity information, and trains targeted noise for the user to be unregistered in the unregistration phase, achieving precise removal of that user without affecting the retained users. In addition, this paper presents extensive comparative experiments of CIU-L against other systems in the fixed identity category scenario. The experimental results show that the average difference between CIU-L and other systems in terms of Accuracy, Precision, Recall, and F1-Score is within 5%, while running time and storage space are reduced by more than a factor of 6, making CIU-L better suited to the needs of identity recognition in real scenarios.
Bluetooth Low Energy (BLE) has emerged as one of the reference technologies for the development of indoor localization systems, due to its increasing ubiquity, low-cost hardware, and the introduction of direction-finding enhancements that improve its ranging performance. However, the intrinsic narrowband nature of BLE makes this technology susceptible to multipath and channel interference. As a result, it is still challenging to achieve decimetre-level localization accuracy, which is necessary when developing location-based services for smart factories and workspaces. To address this challenge, we present BmmW, an indoor localization system that augments the ranging estimates obtained with BLE 5.1's constant tone extension feature with mmWave radar measurements to provide 3D localization of a mobile tag with decimetre-level accuracy. Specifically, BmmW embeds a deep neural network (DNN) that is jointly trained with both BLE and mmWave measurements, practically leveraging the strengths of both technologies. In fact, mmWave radars can locate objects and people with decimetre-level accuracy, but their effectiveness in monitoring stationary targets and multiple objects is limited, and they also suffer from fast signal attenuation that limits the usable range to a few meters. We evaluate BmmW's performance experimentally, and show that its joint DNN training scheme allows it to track mobile tags with a mean 3D localization accuracy of 10 cm when combining angle-of-arrival BLE measurements with mmWave radar data. We further assess two variations of BmmW: BmmW-Lite and BmmW-Lite+, both tailored for single-antenna BLE devices, eliminating the need for bulky and expensive multi-antenna arrays and representing a cost-effective solution that is easy to integrate into compact IoT devices.
In contrast to classic BmmW (which utilizes angle-of-arrival information), BmmW-Lite uses raw in-phase/quadrature (I/Q) measurements, and achieves a mean localization accuracy of 36 cm, thus facilitating precise object tracking in indoor environments even when using budget-friendly single-antenna BLE devices. BmmW-Lite+ extends BmmW-Lite by allowing the localization task to be offloaded from the edge to the cloud when device memory and power constraints require it. To this end, BmmW-Lite+ employs a goal-oriented communication paradigm that compresses initial BLE features into a more compact semantic format at the edge device, which minimizes the amount of data that needs to be sent to the cloud. Our experimental results show that BmmW-Lite+ can compress raw BLE features to as little as 12% of their initial size (hence saving network bandwidth and minimizing energy consumption), with negligible impact on the localization accuracy.
Quantum Key Distribution (QKD) holds the promise of a secure exchange of cryptographic material between applications that have access to the same network of QKD nodes, interconnected through fiber optic or satellite links. Worldwide, several such networks are being deployed at the metropolitan level, where edge computing is already offered by telco operators to customers as a viable alternative to both cloud and on-premise hosting of computational resources. In this paper, we investigate the implications of enabling QKD for edge-native applications from a practical perspective of resource allocation in the QKD network and the edge infrastructure. Specifically, we consider the dichotomy between aggregating all the applications on the same source–destination path vs. adopting a more flexible micro-flow approach, inspired by Software Defined Networking (SDN) concepts. Our simulation results show that there is a fundamental trade-off between the efficient use of resources and the signaling overhead, which we mitigate with suitable hybrid solutions.
Range-based localization has received considerable attention in wireless sensor networks due to its ability to efficiently locate the unknown source of a signal. However, the localization accuracy with a single set of measurements may be inadequate, especially in dynamic and noisy environments. To mitigate this problem, received signal strength difference (RSSD) and time difference of arrival (TDOA) measurements are used to develop an efficient estimator that reduces the bias and improves localization accuracy. First, the RSSD/TDOA-based maximum likelihood (ML) localization problem is transformed into a hybrid information nonnegative constrained least squares (HI-NCLS) framework. Then, this framework is used to develop an effective bias-reduction localization approach (BRLA) with a two-step linearization process. The first step employs a linear solving method (LSM) which exploits an active set method to obtain a sub-optimal estimator. The second step uses a bias reduction method (BRM) to mitigate the correlation introduced by linearization: a weighted instrumental variables matrix (IVM), which is weakly correlated with the noise but strongly correlated with the data matrix (DM), is used in place of the DM. Performance results are presented which demonstrate that the proposed BRLA provides better localization performance than state-of-the-art methods in the literature.
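The core of the HI-NCLS framework above is a least-squares problem with a nonnegativity constraint. The paper solves it with an active-set method; as a simpler, self-contained illustration of the same constrained problem, the sketch below uses projected gradient descent (clamping negative components to zero after each step). The function name and the solver choice are assumptions for illustration only.

```python
def nnls_projected_gradient(A, b, lr=0.01, iters=5000):
    """Minimize ||Ax - b||^2 subject to x >= 0 via projected gradient
    descent: take a gradient step, then project onto the nonnegative
    orthant. A is an m x n list of lists, b a length-m list."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = Ax - b
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        # gradient g = 2 * A^T r
        g = [2 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by projection onto x >= 0
        x = [max(0.0, x[j] - lr * g[j]) for j in range(n)]
    return x
```

For instance, with A = [[1,0],[0,1],[1,1]] and b = [1,-1,0], the unconstrained least-squares solution is (1, -1), but the nonnegativity constraint pins the second coordinate to zero and the solver converges to (0.5, 0) instead.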
Traffic count (or link count) data represents the cumulative traffic in the lanes between two consecutive signalised intersections. Typically, dedicated infrastructure-based sensors are required for link count data collection. The lack of adequate data collection infrastructure leads to a lack of link count data for numerous cities, particularly those in low- and middle-income countries. Here, we address the research problem of link count estimation using crowd-sourced trajectory data to reduce the reliance on any dedicated infrastructure. A stochastic queue discharge model is developed to estimate link counts at signalised intersections, taking into account the sparsity and low penetration rate (i.e., the percentage of vehicles with known trajectory) brought on by crowdsourcing. The issue of low penetration rate is tackled by constructing synthetic trajectories entirely from known trajectories. The proposed model further provides a methodology for estimating the delay resulting from the start-up loss time of the vehicles in the queue under unknown traffic conditions. The proposed model is implemented and validated with real-world data at a signalised intersection in Kolkata, India. Validation results demonstrate that the model can estimate link count with an average accuracy score of 82% at a very low penetration rate of 5.09% (measured at the intersection, not citywide) in unknown traffic conditions, which is yet to be accomplished in the current state of the art.
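To see why a low penetration rate is the central difficulty here, consider the naive baseline: if only a fraction p of vehicles share trajectories, the expected total count is the observed count divided by p. The sketch below shows this baseline (the function name is invented for illustration); the paper's stochastic queue discharge model with synthetic trajectories goes well beyond it, precisely because this simple scale-up becomes very noisy at p around 5%.

```python
def estimate_link_count(observed_vehicles, penetration_rate):
    """Naive scale-up of crowd-sourced counts: if a fraction p of
    vehicles share trajectories, E[total] = observed / p.
    High variance when p is small, which motivates richer models."""
    if not 0 < penetration_rate <= 1:
        raise ValueError("penetration rate must be in (0, 1]")
    return observed_vehicles / penetration_rate
```

For example, 5 observed trajectories at a 5% penetration rate scale up to an estimate of 100 vehicles, but a single extra or missing observed trajectory shifts that estimate by 20 vehicles, illustrating the estimator's fragility at low p.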
The popularization of Fog Computing has provided the foundation for a computational environment better suited to applications demanding low communication latency. However, Fog environments have limited resources and restricted coverage areas, and user mobility requires continuous migrations to keep content accessible and nearby. To enable applications to harness the low latency offered by Fog, it is crucial to develop migration strategies capable of addressing the complexities of the Fog environment while ensuring content availability regardless of user location. This work proposes CMFog, a proactive content migration strategy that leverages mobility prediction in a multi-level fog. Our results show that CMFog is able to provide enhanced flexibility in the migration decision process across a wide diversity of scenarios.