Pub Date : 2023-12-18 DOI: 10.1016/j.pmcj.2023.101871
Chiara Caiazza, Valerio Luconi, Alessio Vecchio
HTTP is frequently used by smartphones and IoT devices to access information and Web services. Three major versions of HTTP are currently in use, each introducing significant changes with respect to the previous one. We evaluated the energy consumption of these versions when used for communication between energy-constrained devices and cloud- or edge-based services. Experimental results show that in a machine-to-machine communication scenario, for the considered client devices (a smartphone and a Single Board Computer) and for a number of cloud/edge services and facilities, HTTP/3 frequently requires more energy than the previous versions of the protocol. The focus of our analysis is on machine-to-machine communication, but to obtain a broader view we also considered a more browsing-like client–server interaction pattern. In this case, HTTP/3 can be more energy efficient than the other versions.
{"title":"Energy consumption of smartphones and IoT devices when using different versions of the HTTP protocol","authors":"Chiara Caiazza , Valerio Luconi , Alessio Vecchio","doi":"10.1016/j.pmcj.2023.101871","DOIUrl":"10.1016/j.pmcj.2023.101871","url":null,"abstract":"<div><p>HTTP is frequently used by smartphones and IoT devices to access information and Web services. Nowadays, HTTP is used in three major versions, each introducing significant changes with respect to the previous one. We evaluated the energy consumption of the major versions of the HTTP protocol when used in the communication between energy-constrained devices and cloud-based or edge-based services. Experimental results show that in a machine-to-machine communication scenario, for the considered client devices – a smartphone and a Single Board Computer – and for a number of cloud/edge services and facilities, HTTP/3 frequently requires more energy than the previous versions of the protocol. The focus of our analysis is on machine-to-machine communication, but to obtain a broader view we also considered a client–server interaction pattern that is more browsing-like. In this case, HTTP/3 can be more energy efficient than the other versions.</p></div>","PeriodicalId":49005,"journal":{"name":"Pervasive and Mobile Computing","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2023-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1574119223001293/pdfft?md5=c73530019c8ccf2d77f8c4830f5951c0&pid=1-s2.0-S1574119223001293-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138742538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Realizing human activity recognition is an important issue in pedestrian navigation and intelligent prosthetic control. Utilizing miniature multi-sensor wearable networks is a reliable method to improve the efficiency and convenience of the recognition system. Effective feature extraction and fusion of multimodal signals is a key issue in recognition. Therefore, this paper proposes an enhanced algorithm based on PCA sensor coupling analysis for data preprocessing. Subsequently, an innovative two-channel convolutional neural network with an SPF feature fusion layer at its core is built. The network fully analyzes the local and global features of multimodal signals using the local contrast and luminance properties of feature images. Compared with traditional methods, the model can reduce the data dimensionality and automatically identify and fuse the key information of the signals. In addition, because most current activity-recognition approaches support only simple actions such as walking and running, this paper constructs a database containing sixteen states by building a network of inertial sensors (IMU), curvature sensors (FLEX) and electromyography sensors (EMG). The experimental results show that the proposed system exhibits better results in complex action recognition and provides a new scheme for the realization of feature fusion and enhancement.
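As a rough illustration of the PCA-based dimensionality reduction used in the preprocessing step, a plain-NumPy sketch (not the paper's coupling-enhanced variant; the array shapes and channel counts are invented):

```python
import numpy as np

def pca_reduce(X, k):
    """Project samples onto the top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)                      # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # rows of Vt = principal axes

# Synthetic stand-in for windowed multimodal sensor features
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))   # e.g. 12 fused IMU + FLEX + EMG channels
Z = pca_reduce(X, k=4)
print(Z.shape)                   # (200, 4)
```

Because the singular values are returned in descending order, the first reduced dimension carries the most variance, which is what makes the truncation to k components a reasonable compression of the fused signal.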
{"title":"PDCHAR: Human activity recognition via multi-sensor wearable networks using two-channel convolutional neural networks","authors":"Yvxuan Ren, Dandan Zhu, Kai Tong, Lulu Xv, Zhengtai Wang, Lixin Kang, Jinguo Chai","doi":"10.1016/j.pmcj.2023.101868","DOIUrl":"10.1016/j.pmcj.2023.101868","url":null,"abstract":"<div><p>Realizing human activity recognition is an important issue in pedestrian navigation and intelligent prosthetic control. Utilizing miniature multi-sensor wearable networks is a reliable method to improve the efficiency and convenience of the recognition system. Effective feature extraction and fusion of multimodal signals is a key issue in recognition. Therefore, this paper proposes an enhanced algorithm based on PCA sensor coupling analysis for data preprocessing. Subsequently, an innovative two-channel convolutional neural network with an SPF feature fusion layer as the core is built. The network fully analyzes the local and global features of multimodal signals using the local contrast and luminance properties of feature images. Compared with traditional methods, the model can reduce the data dimensionality and automatically identify and fuse the key information of the signals. In addition, most of the current mode recognition only supports simple actions such as walking and running, this paper constructs a database containing sixteen states by building a network with inertial sensors (IMU), curvature sensors (FLEX) and electromyography sensors (EMG). 
The experimental results show that the proposed system exhibits better results in complex action recognition and provides a new scheme for the realization of feature fusion and enhancement.</p></div>","PeriodicalId":49005,"journal":{"name":"Pervasive and Mobile Computing","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2023-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138581597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-11-23 DOI: 10.1016/j.pmcj.2023.101860
Hai Truong, Dheryta Jaisinghani, Shubham Jain, Arunesh Sinha, JeongGil Ko, Rajesh Balan
Tracking in dense indoor environments where several thousands of people move around is an extremely challenging problem. In this paper, we present DenseTrack, a system for tracking people in such environments. DenseTrack leverages data from sensing modalities that are already present in these environments: Wi-Fi (from enterprise network deployments) and video (from surveillance cameras). We combine Wi-Fi information with video data to overcome the individual errors induced by these modalities. More precisely, the locations derived from video are used to overcome the localization errors inherent in Wi-Fi signals, whereas precise Wi-Fi MAC IDs are used to locate the same devices across different levels and locations inside a building. Typically, localization in dense environments is computationally expensive when done with video data alone, and hence hard to scale. DenseTrack combines Wi-Fi and video data to improve the accuracy of tracking people represented by video objects from non-overlapping video feeds. DenseTrack is a scalable and device-agnostic solution, as it requires neither app installation on user smartphones nor modifications to the Wi-Fi system. At the core of DenseTrack is our algorithm, inCremental Association of Independent Variables under Uncertainty (CAIVU). CAIVU is inspired by the multi-armed bandit model and is designed to handle the various complex features of practical real-world environments. CAIVU uses connectivity information to match the devices reported by an off-the-shelf Wi-Fi system to specific video blobs obtained through a computationally efficient analysis of video data. By exploiting data from heterogeneous sources, DenseTrack offers an effective real-time solution for individual tracking in heavily populated indoor environments. We emphasize that no previous system has targeted, or been validated in, such dense indoor environments.
We tested DenseTrack extensively using both simulated data and two real-world validations, with data from an extremely dense convention center and a moderately dense university environment. Our simulation results show that DenseTrack achieves an average video-to-Wi-Fi matching accuracy of up to 90% in dense environments with a matching latency of 60 s on the simulator. When tested in a real-world extremely dense environment with over 500,000 people moving between different non-overlapping camera feeds, DenseTrack achieved an average match accuracy of 83% to within a two-person distance, with an average latency of 48 s.
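CAIVU itself is incremental and bandit-inspired; as a simplified intuition for the underlying Wi-Fi-to-video association problem it solves, a one-shot minimum-cost matching between coarse Wi-Fi location estimates and precise video blob positions might look like this (all MAC IDs and coordinates are invented, and this is not the authors' algorithm):

```python
import itertools

# Coarse Wi-Fi location estimates vs. precise video blob positions (invented)
wifi = {"mac_a": (1.0, 1.0), "mac_b": (5.0, 2.0), "mac_c": (9.0, 8.0)}
blobs = {"blob_1": (5.2, 1.8), "blob_2": (0.8, 1.3), "blob_3": (8.7, 8.4)}

def match(wifi, blobs):
    """Minimum-total-distance assignment, brute force over permutations."""
    macs, bids = list(wifi), list(blobs)
    def cost(perm):
        return sum(((wifi[m][0] - blobs[b][0]) ** 2 +
                    (wifi[m][1] - blobs[b][1]) ** 2) ** 0.5
                   for m, b in zip(macs, perm))
    best = min(itertools.permutations(bids), key=cost)
    return dict(zip(macs, best))

assignment = match(wifi, blobs)
print(assignment)  # {'mac_a': 'blob_2', 'mac_b': 'blob_1', 'mac_c': 'blob_3'}
```

Brute force is exponential in the number of devices; an incremental, uncertainty-aware method like CAIVU is precisely what makes the association tractable at the scale of thousands of people.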
{"title":"Tracking people across ultra populated indoor spaces by matching unreliable Wi-Fi signals with disconnected video feeds","authors":"Hai Truong , Dheryta Jaisinghani , Shubham Jain , Arunesh Sinha , JeongGil Ko , Rajesh Balan","doi":"10.1016/j.pmcj.2023.101860","DOIUrl":"https://doi.org/10.1016/j.pmcj.2023.101860","url":null,"abstract":"<div><p>Tracking in dense indoor environments where several thousands of people move around is an extremely challenging problem. In this paper, we present a system — <em>DenseTrack</em> for tracking people in such environments. <em>DenseTrack</em><span> leverages data from the sensing modalities that are already present in these environments — Wi-Fi (from enterprise network deployments) and Video (from surveillance cameras). We combine Wi-Fi information with video data to overcome the individual errors induced by these modalities. More precisely, the locations derived from video are used to overcome the localization errors<span> inherent in using Wi-Fi signals where precise Wi-Fi MAC IDs are used to locate the same devices across different levels and locations inside a building. Typically, localization<span> in dense environments is a computationally expensive process when done with just video data; hence hard to scale. </span></span></span><em>DenseTrack</em> combines Wi-Fi and video data to improve the accuracy of tracking people that are represented by video objects from non-overlapping video feeds. <em>DenseTrack</em><span> is a scalable and device-agnostic solution as it does not require any app installation on user smartphones or modifications to the Wi-Fi system. At the core of </span><em>DenseTrack</em>, is our algorithm — inCremental Association of Independent Variables under Uncertainty (CAIVU). CAIVU is inspired by the multi-armed bandits model and is designed to handle various complex features of practical real-world environments. 
CAIVU matches the devices reported by an off-the-shelf Wi-Fi system using connectivity information to specific video blobs obtained through a computationally efficient analysis of video data. By exploiting data from heterogeneous sources, <em>DenseTrack</em> offers an effective real-time solution for individual tracking in heavily populated indoor environments. We emphasize that no other previous system targeted nor was validated in such dense indoor environments. We tested <em>DenseTrack</em> extensively using both simulated data, as well as two real-world validations using data from an extremely dense convention center and a moderately dense university environment. Our simulation results show that <em>DenseTrack</em> achieves an average video-to-Wi-Fi matching accuracy of up to 90% in dense environments with a matching latency of 60 s on the simulator. When tested in a real-world extremely dense environment with over 500,000 people moving between different non-overlapping camera feeds, <em>DenseTrack</em><span> achieved an average match accuracy of 83% to within a 2-people distance with an average latency of 48 s.</span></p></div>","PeriodicalId":49005,"journal":{"name":"Pervasive and Mobile Computing","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2023-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138474608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Android smartphones have been widely adopted across the globe. Because they can access private and confidential information, these devices are targeted by malware authors. The dramatic escalation of attacks motivates the creation of a robust system that detects malicious actions in Android applications. Malware exposure studies rely on static and dynamic analysis. This work proposes a hybrid machine learning model based on static and dynamic analysis that offers efficient classification and detection of Android malware. The proposed malware classification technique can process any Android application, extract its features, and predict whether the application under analysis is malware or benign. The proposed malware detection model can characterize diverse malware types on the Android platform with a high true-positive rate. Compared to existing approaches, it detects malicious applications in reduced execution time while also improving the security of Android. State-of-the-art machine learning algorithms such as Support Vector Machine, k-Nearest Neighbor, Naïve Bayes, and different ensembles are applied to benign and malicious applications to assess the performance of all classifiers on permissions, API calls, and intents. The proposed technique is evaluated on the Drebin, MalGenome, and Kaggle datasets, and the outcomes indicate that this robust system improves runtime detection of malware with high speed and accuracy. A best accuracy of 100% is achieved on the benchmark datasets when compared with state-of-the-art techniques. Furthermore, the proposed approach outperforms state-of-the-art techniques in terms of computational time, true positive rate, false positive rate, accuracy, precision, recall, and F-measure.
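As an illustration of one classifier family mentioned above, a hand-rolled k-Nearest Neighbor vote over binary permission vectors could be sketched as follows (the feature set, labels, and values are toy inventions, not the paper's pipeline):

```python
import numpy as np

# Toy binary permission vectors (1 = permission requested); real pipelines
# use hundreds of permissions, API calls and intents as features.
X_train = np.array([
    [1, 1, 1, 0],   # malware-like: SMS + contacts + device-admin
    [1, 1, 0, 0],
    [0, 0, 0, 1],   # benign-like: internet only
    [0, 1, 0, 1],
])
y_train = np.array([1, 1, 0, 0])   # 1 = malware, 0 = benign

def knn_predict(x, k=3):
    """Majority vote among the k nearest training vectors (Hamming distance)."""
    d = np.abs(X_train - x).sum(axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    return int(2 * nearest.sum() > k)

print(knn_predict(np.array([1, 1, 1, 1])))  # 1 (malware-like)
```

An ensemble of such classifiers (SVM, Naïve Bayes, etc.) would vote the same way at the model level rather than the neighbor level.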
{"title":"Hybrid machine learning model for malware analysis in android apps","authors":"Saba Bashir , Farwa Maqbool , Farhan Hassan Khan , Asif Sohail Abid","doi":"10.1016/j.pmcj.2023.101859","DOIUrl":"10.1016/j.pmcj.2023.101859","url":null,"abstract":"<div><p><span>Android<span><span> smartphones have been widely adopted across the globe. They have the capability to access private and confidential information resulting in these devices being targeted by malware devisers. The dramatic escalation of assaults build an awareness to create a robust system that detects the occurrence of malicious actions in </span>Android applications. The malware exposure study consists of static and dynamic analysis. This research work proposed a hybrid </span></span>machine learning<span><span><span> model based on static and dynamic analysis which offers efficient classification and detection of Android malware. The proposed novel malware classification technique can process any android application, then extracts its features, and predicts whether the applications under process is malware or benign. The proposed malware detection model can characterizes diverse malware types from Android platform with high positive rate. The proposed approach detects </span>malicious applications<span><span> in reduced execution time while also improving the security of Android as compared to existing approaches. State-of-the-art machine learning algorithms such as </span>Support Vector Machine, k-Nearest Neighbor, Naïve Bayes, and different ensembles are employed on benign and malign applications to assess the execution of all classifiers on permissions, API calls and intents to identify malware. The proposed technique is evaluated on Drebin, MalGenome and Kaggle dataset, and outcomes indicate that this robust system improved runtime detection of malware with high speed and accuracy. 
Best accuracy of 100% is achieved on benchmark dataset when compared with </span></span>state of the art techniques. Furthermore, the proposed approach outperforms state of the art techniques in terms of computational time, true positive rate, false positive rate, accuracy, precision, recall, and f-measure.</span></p></div>","PeriodicalId":49005,"journal":{"name":"Pervasive and Mobile Computing","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2023-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135515228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-10-27 DOI: 10.1016/j.pmcj.2023.101850
Long Sheng, Yue Chen, Shuli Ning, Shengpeng Wang, Bin Lian, Zhongcheng Wei
As a cornerstone of the development of emerging integrated sensing and communication, human activity recognition technology based on WiFi signals has been extensively studied. However, existing activity sensing models suffer serious performance degradation when applied to new scenarios due to the influence of environmental dynamics. To address this issue, we present an environment-independent activity recognition model named DA-HAR, which utilizes a dual adversarial network. The framework exploits adversarial training among source domain classifiers and source–target domain discriminators to extract environment-independent activity features. To improve the performance of the model, a pseudo-label prediction based approach is introduced to assign labels to the target domain samples that closely resemble the source domain samples, thus mitigating the distribution deviation of activity features between the source and target domains. Experimental results show that our proposed model has better cross-domain recognition performance than state-of-the-art recognition systems; in particular, when the distribution of activity features in the source and target domains differs significantly, accuracy improves by 6.96%–11.22%.
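The pseudo-label idea, assigning source labels only to target samples that closely resemble source samples, can be sketched as follows (a nearest-neighbor criterion with an invented distance threshold, not necessarily the authors' exact rule):

```python
import numpy as np

def pseudo_label(source_X, source_y, target_X, tau=1.0):
    """Give a target sample the label of its nearest source sample, but only
    when that neighbor is closer than tau; otherwise leave it unlabeled (-1)."""
    labels = np.full(len(target_X), -1)
    for i, t in enumerate(target_X):
        d = np.linalg.norm(source_X - t, axis=1)
        j = int(np.argmin(d))
        if d[j] < tau:
            labels[i] = source_y[j]
    return labels

src_X = np.array([[0.0, 0.0], [5.0, 5.0]])   # source-domain features
src_y = np.array([0, 1])                     # activity labels
tgt_X = np.array([[0.2, 0.1], [4.8, 5.3], [2.5, 2.5]])
labels = pseudo_label(src_X, src_y, tgt_X)
print(labels.tolist())  # [0, 1, -1]
```

Only confidently matched target samples receive labels, which is what keeps the pseudo-labels from amplifying the source/target distribution gap.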
{"title":"DA-HAR: Dual adversarial network for environment-independent WiFi human activity recognition","authors":"Long Sheng , Yue Chen , Shuli Ning , Shengpeng Wang , Bin Lian , Zhongcheng Wei","doi":"10.1016/j.pmcj.2023.101850","DOIUrl":"https://doi.org/10.1016/j.pmcj.2023.101850","url":null,"abstract":"<div><p><span><span>As the cornerstone of the development of emerging integrated sensing and communication, human activity recognition technology based on WiFi signals has been extensively studied. However, the existing activity sensing models will suffer serious </span>performance degradation<span><span> when applied to new scenarios due to the influence of environmental dynamics. To address this issue, we present an environment-independent activity recognition model named DA-HAR, which utilizes dual adversarial network. The framework exploits adversarial training among source domain classifiers and source–target domain </span>discriminators to extract environment-independent activity features. To improve the performance of the model, a pseudo-label prediction based approach is introduced to assign labels to the target domain samples that closely resemble the source domain samples, thus mitigating the distribution deviation of activity features between source domain and target domain. 
Experimental results show that our proposed model has better cross-domain recognition performance compared to state-of-the-art recognition systems, especially when the distribution of activity features in the source domain and the target domain is significantly different, the accuracy is improved by 6.96% </span></span><span><math><mo>∼</mo></math></span> 11.22%.</p></div>","PeriodicalId":49005,"journal":{"name":"Pervasive and Mobile Computing","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2023-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92014450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
While many pervasive computing applications increasingly utilize real-time context extracted from a vision sensing infrastructure, the high energy overhead of DNN-based vision sensing pipelines remains a challenge for sustainable in-the-wild deployment. One common approach to reducing such energy overheads is the capture and transmission of lower-resolution images to an edge node (where the DNN inferencing task is executed), but this results in an accuracy-vs-energy tradeoff, as DNN inference accuracy typically degrades with a drop in resolution. In this work, we introduce MRIM, a simple but effective framework to tackle this tradeoff. Under MRIM, the vision sensor platform first executes a lightweight preprocessing step to determine the saliency of different sub-regions within a single captured image frame, and then performs a saliency-aware non-uniform downscaling of individual sub-regions to produce a “mixed-resolution” image. We describe two novel low-complexity algorithms that the sensor platform can use to quickly compute suitable resolution choices for different regions under different energy/accuracy constraints. Experimental studies, involving object detection tasks evaluated on traces from two benchmark urban monitoring datasets as well as a prototype Raspberry Pi-based MRIM implementation, demonstrate MRIM’s efficacy: even with an unoptimized embedded platform, MRIM can provide system energy conservation of 35+% (∼80% in high-accuracy regimes) or increase task accuracy by 8+%, over conventional baselines of uniform resolution downscaling or image encoding, while supporting high throughput. On a low-power ESP32 vision board, MRIM continues to provide 60+% energy savings over uniform downscaling while maintaining high detection accuracy. We further introduce an automated data-driven technique for determining a close-to-optimal number of MRIM sub-regions (for differential resolution adjustment) across different deployment conditions.
We also show the generalized use of MRIM by considering an additional license plate recognition (LPR) task: while alternative approaches suffer a 35%–40% loss in accuracy, MRIM suffers only a modest recognition loss of ∼10% even when the transmission data is reduced by over 50%.
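The saliency-aware non-uniform downscaling step can be sketched as follows (block-averaging as a stand-in for true downscaling; the block size, frame size, and saliency mask are invented):

```python
import numpy as np

def mixed_resolution(img, salient_mask, block=8):
    """Keep salient blocks at full resolution; flatten each non-salient
    block to its mean value (a stand-in for aggressive downscaling)."""
    out = img.astype(float).copy()
    H, W = img.shape
    for y in range(0, H, block):
        for x in range(0, W, block):
            if not salient_mask[y // block, x // block]:
                out[y:y + block, x:x + block] = img[y:y + block, x:x + block].mean()
    return out

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32))      # toy grayscale frame
sal = np.zeros((4, 4), dtype=bool)
sal[1, 1] = True                               # one 8x8 region deemed salient
mixed = mixed_resolution(img, sal)
print(mixed.shape)  # (32, 32)
```

Non-salient blocks compress to a single value each, which is where the transmission-energy savings come from while the salient region keeps full detail for the detector.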
{"title":"MRIM: Lightweight saliency-based mixed-resolution imaging for low-power pervasive vision","authors":"Ji-Yan Wu, Vithurson Subasharan, Tuan Tran, Kasun Gamlath, Archan Misra","doi":"10.1016/j.pmcj.2023.101858","DOIUrl":"https://doi.org/10.1016/j.pmcj.2023.101858","url":null,"abstract":"<div><p><span><span>While many pervasive computing applications increasingly utilize real-time context extracted from a vision sensing infrastructure, the high energy overhead of DNN-based vision sensing pipelines remains a challenge for sustainable in-the-wild deployment. One common approach to reducing such energy overheads is the capture and transmission of lower-resolution images to an edge node (where the </span>DNN inferencing task is executed), but this results in an accuracy-vs-energy tradeoff, as the DNN inference accuracy typically degrades with a drop in resolution. In this work, we introduce </span><em>MRIM</em>, a simple but effective framework to tackle this tradeoff. Under <em>MRIM</em><span>, the vision sensor platform first executes a lightweight preprocessing step to determine the saliency of different sub-regions within a single captured image frame, and then performs a saliency-aware non-uniform downscaling of individual sub-regions to produce a “mixed-resolution” image. We describe two novel low-complexity algorithms that the sensor platform can use to quickly compute suitable resolution choices for different regions under different energy/accuracy constraints. 
Experimental studies, involving object detection tasks evaluated traces from two benchmark urban monitoring datasets as well as a prototype Raspberry Pi-based </span><em>MRIM</em> implementation, demonstrate <em>MRIM’s</em> efficacy: even with an unoptimized embedded platform, <em>MRIM</em><span> can provide system energy conservation of </span><span><math><mrow><mn>35</mn><mo>+</mo><mtext>%</mtext></mrow></math></span> (<span><math><mo>∼</mo></math></span>80% in high accuracy regimes) or increase task accuracy by <span><math><mrow><mn>8</mn><mo>+</mo><mtext>%</mtext></mrow></math></span><span>, over conventional baselines of uniform resolution downscaling or image encoding, while supporting high throughput. On a low power ESP32 vision board, </span><em>MRIM</em><span> continues to provide 60+% energy savings over uniform downscaling while maintaining high detection accuracy. We further introduce an automated data-driven technique for determining a close-to-optimal number of </span><em>MRIM</em> sub-regions (for differential resolution adjustment), across different deployment conditions. 
We also show the generalized use of <em>MRIM</em><span> by considering an additional license plate recognition (LPR) task: while alternative approaches suffer 35%–40% loss in accuracy, </span><em>MRIM</em> suffers only a modest recognition loss of <span><math><mo>∼</mo></math></span>10% even when the transmission data is reduced by over 50%.</p></div>","PeriodicalId":49005,"journal":{"name":"Pervasive and Mobile Computing","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2023-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92108576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-10-20 DOI: 10.1016/j.pmcj.2023.101849
Borja Molina-Coronado, Usue Mori, Alexander Mendiburu, Jose Miguel-Alonso
The rapidly evolving nature of Android apps poses a significant challenge to the static batch machine learning algorithms employed in malware detection systems, as they quickly become obsolete. Despite this challenge, the existing literature pays limited attention to this issue, with many advanced Android malware detection approaches, such as Drebin, DroidDet and MaMaDroid, relying on static models. In this work, we show how retraining techniques are able to maintain detector capabilities over time. In particular, we analyze the effect of two aspects on the efficiency and performance of the detectors: (1) the frequency with which the models are retrained, and (2) the data used for retraining. In the first experiment, we compare periodic retraining with a more advanced concept drift detection method that triggers retraining only when necessary. In the second experiment, we analyze sampling methods to reduce the amount of data used to retrain models. Specifically, we compare fixed-size windows of recent data and state-of-the-art active learning methods that select those apps that help keep the training dataset small but diverse. Our experiments show that concept drift detection and sample selection mechanisms result in very efficient retraining strategies which can successfully maintain the performance of state-of-the-art static Android malware detectors in changing environments.
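A drift-triggered retraining loop of the kind compared in the first experiment can be sketched as follows (a simple error-rate window, not the specific drift detector evaluated in the paper; the window size, threshold, and simulated stream are invented):

```python
def drift_triggered_retraining(stream, window=50, threshold=0.30):
    """Count retrainings triggered by the recent error rate of the detector.

    `stream` yields booleans: was the current detector's prediction correct?
    A real system would refit the model where `retrains` is incremented.
    """
    errors, retrains = [], 0
    for correct in stream:
        errors.append(0 if correct else 1)
        if len(errors) >= window and sum(errors[-window:]) / window > threshold:
            retrains += 1      # retrain() would go here
            errors.clear()     # restart monitoring with the fresh model
    return retrains

# Simulated outcomes: the detector works, then degrades to a coin flip
stream = [True] * 200 + [i % 2 == 0 for i in range(200)]
print(drift_triggered_retraining(stream))
```

Unlike periodic retraining, this loop spends no retraining effort during the stable first phase and reacts only once errors accumulate, which is the efficiency argument the paper makes.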
{"title":"Efficient concept drift handling for batch android malware detection models","authors":"Borja Molina-Coronado , Usue Mori , Alexander Mendiburu , Jose Miguel-Alonso","doi":"10.1016/j.pmcj.2023.101849","DOIUrl":"https://doi.org/10.1016/j.pmcj.2023.101849","url":null,"abstract":"<div><p>The rapidly evolving nature of Android apps poses a significant challenge to static batch machine learning algorithms employed in malware detection systems, as they quickly become obsolete. Despite this challenge, the existing literature pays limited attention to addressing this issue, with many advanced Android malware detection approaches, such as Drebin, DroidDet and MaMaDroid, relying on static models. In this work, we show how retraining techniques are able to maintain detector capabilities over time. Particularly, we analyze the effect of two aspects in the efficiency and performance of the detectors: (1) the frequency with which the models are retrained, and (2) the data used for retraining. In the first experiment, we compare periodic retraining with a more advanced concept drift detection method that triggers retraining only when necessary. In the second experiment, we analyze sampling methods to reduce the amount of data used to retrain models. Specifically, we compare fixed sized windows of recent data and state-of-the-art active learning methods that select those apps that help keep the training dataset small but diverse. 
Our experiments show that concept drift detection and sample selection mechanisms result in very efficient retraining strategies which can be successfully used to maintain the performance of the static Android malware state-of-the-art detectors in changing environments.</p></div>","PeriodicalId":49005,"journal":{"name":"Pervasive and Mobile Computing","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2023-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49767195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-10-05 DOI: 10.1016/j.pmcj.2023.101848
Emanuele Lattanzi, Lorenzo Calisti, Paolo Capellacci
Wearable devices have become increasingly popular in recent years, and they offer a great opportunity for sensor-based continuous human activity recognition in real-world scenarios. However, one of the major challenges is their limited battery life. In this study, we propose an energy-aware human activity recognition framework for wearable devices based on a lightweight, accurate trigger. The trigger acts as a binary classifier capable of recognizing, with maximum accuracy, the presence or absence of one of the activities of interest in the real-time input signal, and it is responsible for starting the energy-intensive classification procedure only when needed. Measurements conducted on a real wearable device show that the proposed approach can reduce energy consumption by up to 95% in realistic case studies, at the cost of a performance deterioration of at most 1% or 2% compared to the traditional energy-intensive classification strategy.
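The energy argument behind the trigger can be sketched with a toy energy model (all energy figures and the 5% activity rate are invented, not the paper's measurements):

```python
# Invented energy model (microjoules per window): the cheap trigger runs on
# every window; the full classifier runs only when the trigger fires.
E_TRIGGER, E_CLASSIFIER = 100, 10_000

def total_energy(windows, trigger):
    total = 0
    for w in windows:
        total += E_TRIGGER
        if trigger(w):            # binary "activity of interest?" gate
            total += E_CLASSIFIER
    return total

# 1000 windows, 5% of which contain an activity of interest
windows = [1 if i % 20 == 0 else 0 for i in range(1000)]
always_on = 1000 * E_CLASSIFIER
gated = total_energy(windows, trigger=lambda w: w == 1)
print(f"saving: {1 - gated / always_on:.1%}")  # saving: 94.0%
```

The saving scales with how rarely the activities of interest occur, which is why the trigger pays off in continuous real-world monitoring.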
{"title":"Lightweight accurate trigger to reduce power consumption in sensor-based continuous human activity recognition","authors":"Emanuele Lattanzi, Lorenzo Calisti, Paolo Capellacci","doi":"10.1016/j.pmcj.2023.101848","DOIUrl":"https://doi.org/10.1016/j.pmcj.2023.101848","url":null,"abstract":"<div><p>Wearable devices have become increasingly popular in recent years, and they offer a great opportunity for sensor-based continuous human activity recognition in real-world scenarios. However, one of the major challenges is their limited battery life. In this study, we propose an energy-aware human activity recognition framework for wearable devices based on a lightweight accurate trigger. The trigger acts as a binary classifier capable of recognizing, with maximum accuracy, the presence or absence of one of the interesting activities in the real-time input signal and it is responsible for starting the energy-intensive classification procedure only when needed. The measurement results conducted on a real wearable device show that the proposed approach can reduce energy consumption by up to 95% in realistic case studies, with a cost of performance deterioration of at most 1% or 2% compared to the traditional energy-intensive classification strategy.</p></div>","PeriodicalId":49005,"journal":{"name":"Pervasive and Mobile Computing","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2023-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49764728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-10-01 DOI: 10.1016/j.pmcj.2023.101842
Serafino Cicerone, Alessia Di Fonso, Gabriele Di Stefano, Alfredo Navarra
Mutual Visibility is a well-known problem in the context of mobile robots. For a set of robots disposed in the Euclidean plane, it asks for moving the robots without collisions so as to achieve a placement ensuring that no three robots are collinear. For robots moving on graphs, we consider the Geodesic Mutual Visibility