Pub Date: 2024-11-28 | DOI: 10.1016/j.pmcj.2024.101997
Giacomo Longo , Alessandro Cantelli-Forti , Enrico Russo , Francesco Lupia , Martin Strohmeier , Andrea Pugliese
Accurately determining the number of people affected by emergencies is essential for deploying effective response measures during disasters. Traditional solutions like cellular and Wi-Fi networks are often rendered ineffective during such emergencies due to widespread infrastructure damage or non-functional connectivity, prompting the exploration of more resilient methods. This paper proposes a novel solution utilizing Bluetooth Low Energy (BLE) technology and decentralized networks composed entirely of mobile and wearable devices to count individuals autonomously without reliance on external communication equipment or specialized personnel. The count leverages uncoordinated relayed communication among devices within these networks, enabling us to extend our counting capabilities well beyond the direct range of rescuers. We employ a formally evaluated, experimentally validated, and privacy-preserving counting algorithm that demonstrates rapid convergence and high accuracy even in large-scale scenarios.
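The abstract does not reproduce the paper's algorithm. As an illustration of why uncoordinated relayed counting can work at all, the sketch below uses Flajolet-Martin-style registers: the register-wise-max merge is commutative, associative, and idempotent, so devices can gossip sketches over arbitrary relay paths without coordination, and only hashed ephemeral IDs ever leave a device. All names, parameters, and the calibration constant are illustrative, not taken from the paper.

```python
import hashlib

def _rank(h: int, bits: int = 32) -> int:
    """1-based position of the lowest set bit of h; 'bits' caps the scan."""
    for i in range(bits):
        if h & (1 << i):
            return i + 1
    return bits

def sketch(device_ids, num_regs: int = 64):
    """Build a probabilistic counting sketch: one max-rank register per
    hash bucket. Only hashed ephemeral IDs are used, never raw identities."""
    regs = [0] * num_regs
    for d in device_ids:
        h = int(hashlib.sha256(d.encode()).hexdigest(), 16)
        regs[h % num_regs] = max(regs[h % num_regs], _rank(h >> 8))
    return regs

def merge(a, b):
    """Register-wise max: order-free and duplicate-insensitive, so relayed
    gossip over any path converges to the same union sketch."""
    return [max(x, y) for x, y in zip(a, b)]

def estimate(regs):
    """Rough Flajolet-Martin-style distinct-count estimate; the constant
    0.77351 is from Flajolet & Martin, and calibration here is approximate."""
    m = len(regs)
    return m / 0.77351 * 2 ** (sum(regs) / m)
```

Because `merge` is idempotent, hearing the same relayed sketch twice never inflates the count, which is the key robustness property for uncoordinated BLE rebroadcasts.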
Title: "Collective victim counting in post-disaster response: A distributed, power-efficient algorithm via BLE spontaneous networks" (Pervasive and Mobile Computing, Vol. 106, Article 101997)
Pub Date: 2024-11-23 | DOI: 10.1016/j.pmcj.2024.101998
Ahmed Fahim Mostafa , Mohamed Abdel-Kader , Yasser Gadallah
Data collection techniques can be used to determine the coverage conditions of a cellular communication network within a given area. In such tasks, the data acquisition process faces significant challenges for larger or inaccessible locations. Such challenges can be alleviated through the use of unmanned aerial vehicles (UAVs). This way, data acquisition obstacles can be overcome to acquire and process the necessary data points with relative ease to estimate a full area coverage map for the concerned network. In this study, we formulate the problem of deploying a UAV to acquire the minimum possible measurement data points in a geographical region for the purpose of constructing a full communication coverage gap map for this region. We then devise an estimation model that utilizes the measured data samples and determines the noise/loss levels of the communication links at the other unvisited spots of the region accordingly. The proposed estimation model is based on a cascade-forward neural network to allow for both nonlinear and direct linear relationships between the input data and the output estimations. We further investigate the conventional method of using linear regression estimators to decide on the received power levels at the different locations of the examined area. Our simulation evaluations show that the proposed nonlinear estimator outperforms the conventional linear regression technique in terms of the communication coverage error level while using the minimum possible collected data points. These minimum data points are then used in constructing a complete coverage gap map visualization that demonstrates the overall network service conditions within the surveyed region.
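A cascade-forward network differs from a plain feed-forward one by adding direct input-to-output connections alongside the hidden path. The minimal forward pass below (pure Python, weights and shapes illustrative, not the paper's trained model) shows why the architecture can express "both nonlinear and direct linear relationships": with the hidden-to-output weights zeroed it collapses exactly to linear regression.

```python
import math

def matvec(W, x):
    """Dense matrix-vector product over plain lists."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def cascade_forward(x, Wh, bh, Wo, Wd, bo):
    """Cascade-forward pass: y = Wd*x + Wo*tanh(Wh*x + bh) + bo.
    The direct weights Wd carry the linear structure; the tanh hidden
    path models the nonlinear residual on top of it."""
    hidden = [math.tanh(h + b) for h, b in zip(matvec(Wh, x), bh)]
    return [d + n + b
            for d, n, b in zip(matvec(Wd, x), matvec(Wo, hidden), bo)]
```

Setting `Wo` to zeros reduces the output to `Wd*x + bo`, i.e. the conventional linear regression estimator the paper compares against, so in principle the cascade-forward model can do no worse than the linear baseline.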
Title: "Three-dimensional spectrum coverage gap map construction in cellular networks: A non-linear estimation approach" (Pervasive and Mobile Computing, Vol. 106, Article 101998)
Cooperative spectrum sensing (CSS) in cognitive radio networks (CRNs) enhances spectral decision-making precision but introduces vulnerabilities to malicious secondary user (SU) attacks. This paper proposes a decentralized trust and reputation management (TRM) framework to address these vulnerabilities, emphasizing the need to mitigate risks associated with centralized systems. Inspired by blockchain technology, we present a distributed TRM method for CSS in CRNs, significantly reducing the impact of malicious attacks. Our approach leverages a Proof of Trust (PoT) system to enhance the integrity of CSS, thereby improving the accuracy of spectral decision-making while reducing false positives and false negatives. In this system, SUs' trust scores are dynamically updated based on their sensing reports, and SUs collaboratively participate in the formation of new blocks based on these scores. Simulation results validate the effectiveness of the proposed method, indicating its potential to enhance security and reliability in CRNs.
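The core feedback loop of trust-weighted CSS can be sketched in a few lines: fuse the binary sensing reports by a trust-weighted vote, then raise the trust of SUs that agreed with the fused decision and lower the trust of those that did not. This is a generic illustration of the idea, not the paper's PoT protocol; the update step `delta` and clamping are assumptions.

```python
def fuse(reports, trust):
    """Trust-weighted vote over binary channel-occupancy reports:
    True wins when the trust mass behind True exceeds that behind False."""
    score = sum(t if r else -t for r, t in zip(reports, trust))
    return score > 0

def update_trust(reports, trust, decision, delta=0.05):
    """Reward SUs whose report matched the fused decision, penalise the
    others; scores stay clamped to [0, 1]."""
    return [min(1.0, t + delta) if r == decision else max(0.0, t - delta)
            for r, t in zip(reports, trust)]
```

Over repeated sensing rounds a persistently lying SU's trust decays toward zero, so its reports stop influencing the fused decision, which is what suppresses false positives and false negatives.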
Pub Date: 2024-11-20 | DOI: 10.1016/j.pmcj.2024.101999
Mahsa Mahvash, Neda Moghim, Mojtaba Mahdavi, Mahdieh Amiri, Sachin Shetty
Title: "Blockchain-Inspired Trust Management in Cognitive Radio Networks with Cooperative Spectrum Sensing" (Pervasive and Mobile Computing, Vol. 106, Article 101999)
Mobile Edge Cloud Computing (MECC), as a promising partial computing offloading solution, has provided new possibilities for compute-intensive and delay-sensitive mobile applications, which can simultaneously leverage edge computing and cloud services. However, designing resource allocation strategies for MECC faces an extremely challenging problem of simultaneously satisfying the end-to-end latency requirements and minimum resource allocation of multiple mobile applications. To address this issue, we comprehensively consider the randomness of computing request arrivals, service time, and dynamic computing resources. We model the MECC network as a two-level tandem queue consisting of two sequential computing processing queues, each with multiple servers. We apply a deep reinforcement learning algorithm called Deep Deterministic Policy Gradient (DDPG) to learn the computing speed adjustment strategy for the tandem queue. This strategy ensures the end-to-end latency requirements of multiple mobile applications while preventing overuse of the total computing resources of edge servers and cloud servers. Numerous simulation experiments demonstrate that our approach is significantly superior to other methods in dynamic network environments.
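The DDPG controller itself is beyond an abstract-level sketch, but the latency model such a speed-adjustment policy acts on is compact. Treating each level of the tandem as an M/M/c queue (a simplifying assumption; the paper models general randomness), end-to-end latency is approximately the sum of per-stage sojourn times, and raising a stage's service rate lowers latency at the cost of more computing resource, which is exactly the trade-off the learned policy navigates.

```python
import math

def erlang_c(c, a):
    """Erlang C: probability an arrival must wait in an M/M/c queue with
    offered load a = lam/mu (requires a < c for stability)."""
    s = sum(a ** k / math.factorial(k) for k in range(c))
    top = a ** c / math.factorial(c) * c / (c - a)
    return top / (s + top)

def mmc_sojourn(lam, mu, c):
    """Mean sojourn time W = Wq + 1/mu of a stable M/M/c queue."""
    a = lam / mu
    assert a < c, "queue is unstable"
    return erlang_c(c, a) / (c * mu - lam) + 1 / mu

def tandem_latency(lam, mu_edge, c_edge, mu_cloud, c_cloud):
    """End-to-end latency of a two-level tandem, approximated as the sum of
    per-stage sojourn times (exact for Jackson networks)."""
    return mmc_sojourn(lam, mu_edge, c_edge) + mmc_sojourn(lam, mu_cloud, c_cloud)
```

A speed-adjustment policy would repeatedly evaluate something like `tandem_latency` (or its observed counterpart) and pick the smallest `mu_edge`/`mu_cloud` pair that still meets each application's end-to-end deadline.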
Pub Date: 2024-11-07 | DOI: 10.1016/j.pmcj.2024.101996
Lingfei Yu, Hongliu Xu, Yunhao Zeng, Jiali Deng
Title: "Delay-aware resource allocation for partial computation offloading in mobile edge cloud computing" (Pervasive and Mobile Computing, Vol. 105, Article 101996)
Pub Date: 2024-10-19 | DOI: 10.1016/j.pmcj.2024.101994
Arshad Sher , Otar Akanyeti
Human gait is a key biomarker for health, independence and quality of life. Advances in wearable inertial sensor technologies have paved the way for out-of-the-lab human gait analysis, which is important for the assessment of mobility and balance in natural environments and has applications in multiple fields from healthcare to urban planning. Automatic recognition of the environment where walking takes place is a prerequisite for successful characterisation of terrain-induced gait alterations. A key question which remains unexplored in the field is how minimum data requirements for high terrain classification accuracy change depending on the sensor placement on the body. To address this question, we evaluate the changes in performance of five canonical machine learning classifiers by varying several data sampling parameters including sampling rate, segment length, and sensor configuration. Our analysis on two independent datasets clearly demonstrates that a single inertial measurement unit is sufficient to recognise terrain-induced gait alterations, that accuracy and minimum data requirements vary with the device's position on the body, and that choosing correct data sampling parameters for each position can improve classification accuracy by up to 40% or reduce data size by a factor of 16. Our findings highlight the need for adaptive data collection and processing algorithms for resource-efficient computing on mobile devices.
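The "data sampling parameters" being traded off are concretely the sampling rate and window (segment) length fed to the classifiers. A minimal pre-processing sketch, with parameter names and the mean/std feature choice assumed for illustration rather than taken from the paper:

```python
import math

def segment(signal, fs, window_s, overlap=0.5):
    """Split a 1-D inertial signal sampled at fs Hz into fixed-length,
    overlapping windows; fs and window_s are the sampling parameters
    whose minimum values the study searches for."""
    n = int(fs * window_s)
    step = max(1, int(n * (1 - overlap)))
    return [signal[i:i + n] for i in range(0, len(signal) - n + 1, step)]

def features(window):
    """Minimal time-domain feature vector (mean, standard deviation),
    typical input for canonical classifiers in gait work."""
    m = sum(window) / len(window)
    var = sum((x - m) ** 2 for x in window) / len(window)
    return [m, math.sqrt(var)]
```

Halving `fs` or `window_s` directly halves the data volume per window, which is why tuning these per sensor position can cut data size so sharply while preserving accuracy.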
Title: "Minimum data sampling requirements for accurate detection of terrain-induced gait alterations change with mobile sensor position" (Pervasive and Mobile Computing, Vol. 105, Article 101994)
Pub Date: 2024-10-16 | DOI: 10.1016/j.pmcj.2024.101995
Tingxuan Fu , Sijia Hao , Qiming Chen , Zihan Yan , Huawei Liu , Amin Rezaeipanah
The rapid advancement of technology has led to the proliferation of devices connected to Internet of Things (IoT) networks, bringing challenges in both energy management and secure data communication. In addition to energy constraints, IoT networks face threats from malicious nodes, which jeopardize the security of communications. To address these challenges, we propose an Energy-aware secure Routing scheme via Two-Way Trust evaluation (ERTWT) for IoT networks. This scheme enhances network protection against various attacks by calculating trust values based on energy trust, direct trust, and indirect trust. The scheme aims to enhance the efficiency of data transmission by dynamically selecting routes based on both energy availability and trustworthiness metrics of fog nodes. Since trust management can guarantee privacy and security, ERTWT allows the service requester and the service provider to verify each other's safety and reliability simultaneously. In addition, we implement Generative Flow Networks (GFlowNets) to predict the energy levels available in nodes in order to use them optimally. The proposed scheme has been compared with several advanced energy-aware and trust-based routing protocols. Evaluation results show that ERTWT more effectively detects malicious nodes while achieving better energy efficiency and data transmission rates.
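The distinguishing feature of ERTWT named in the abstract is that trust is evaluated in both directions before a route is used. A toy sketch of that gate, with the component weights and admission threshold invented for illustration (the paper's actual formulas are not given in the abstract):

```python
def composite_trust(energy_t, direct_t, indirect_t, w=(0.3, 0.4, 0.3)):
    """Blend the three trust components named in ERTWT into one score;
    the weights here are illustrative, not from the paper."""
    return w[0] * energy_t + w[1] * direct_t + w[2] * indirect_t

def two_way_admit(requester, provider, threshold=0.6):
    """Two-way evaluation: BOTH endpoints must clear the trust threshold
    before the route is admitted, unlike one-sided schemes where only the
    provider is vetted."""
    return (composite_trust(*requester) >= threshold
            and composite_trust(*provider) >= threshold)
```

The symmetry is the point: a malicious requester with depleted-energy reports is rejected just as a malicious provider would be.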
Title: "An energy-aware secure routing scheme in internet of things networks via two-way trust evaluation" (Pervasive and Mobile Computing, Vol. 105, Article 101995)
Pub Date: 2024-10-10 | DOI: 10.1016/j.pmcj.2024.101993
Youjia Han, Huibin Wang, Yueheng Li, Lili Zhang
Many trust-based models for wireless sensor networks do not account for trust attacks, destructive phenomena that undermine the security and reliability of these models. Therefore, a trust-based fast security model fused with an improved density peaks clustering algorithm (TFSM-DPC) is proposed in this paper to quickly identify trust attacks. First, when calculating direct trust values, TFSM-DPC designs adaptive penalty factors based on the state of received and sent packets and on node behaviors, and introduces volatilization factors to reduce the effect of historical trust values. Second, TFSM-DPC improves the density peaks clustering (DPC) algorithm to evaluate the trustworthiness of each recommendation value, thus filtering malicious recommendations before the indirect trust values are calculated. Moreover, to filter both types of recommendations, the improved DPC algorithm takes artificial benchmark data together with the trust values recommended by neighbors as input. Finally, based on the relationship between direct trust and indirect trust, a secure formula for calculating the comprehensive trust is designed. The proposed TFSM-DPC can therefore improve the accuracy of trust evaluation and speed up the identification of malicious nodes.
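The direct-trust step combines two ingredients named in the abstract: a penalty factor that weights bad behavior more heavily than good, and a volatilization factor that discounts history so an on-off attacker cannot coast on old good behavior. A one-line sketch of that update, with the factor values and the packet-ratio evidence model assumed for illustration:

```python
def update_direct_trust(prev, forwarded, dropped, volat=0.8, penalty=2.0):
    """One-step direct-trust update: current evidence is the fraction of
    well-forwarded packets, with drops weighted by an (illustrative)
    penalty factor; `volat` discounts the historical trust value so stale
    reputation decays."""
    total = forwarded + penalty * dropped
    current = forwarded / total if total else prev
    return volat * prev + (1 - volat) * current
```

Because `penalty > 1`, a node must forward more than it drops just to hold its trust steady, and lowering `volat` makes the model react faster to an on-off attack at the price of noisier scores.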
Simulation results show that, compared to other trust-based algorithms, TFSM-DPC can effectively identify on-off, bad-mouth, and collusion attacks, and excludes malicious nodes from the network faster.
Title: "Trust-aware and improved density peaks clustering algorithm for fast and secure models in wireless sensor networks" (Pervasive and Mobile Computing, Vol. 105, Article 101993)
Pub Date: 2024-09-26 | DOI: 10.1016/j.pmcj.2024.101992
Zahra Aghaee , Afsaneh Fatemi , Peyman Arebi
In recent years, a type of complex network called the Social Internet of Things (SIoT) has attracted the attention of researchers. Controllability is one of the important problems in complex networks, with essential applications in social, biological, and technical networks. It can also play an important role in the control of social smart cities, but it has not yet been defined as a specific problem on the SIoT, and no solution has been provided for it. This paper addresses the controllability problem of the temporal SIoT network. First, a definition of the temporal SIoT network is provided. Then, the unique relationships of this network are defined and formally modeled. Next, the controllability problem is applied to the temporal SIoT network (CSIoT) to identify the Minimum Driver nodes Set (MDS). The proposed CSIoT is then compared with state-of-the-art methods for performance analysis. The results also account for device heterogeneity (different types, brands, and models). Overall, 69.80% of the SIoT sub-graph nodes have been identified as critical driver nodes across 152 different sets. The proposed method handles network control in a distributed manner.
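For static directed networks, the classical way to find a Minimum Driver nodes Set is via maximum matching: by the structural-controllability result of Liu, Slotine and Barabasi (Nature, 2011), the nodes left unmatched by a maximum matching must be driven directly. The sketch below implements that baseline with a plain augmenting-path matching; it illustrates the MDS concept the paper builds on, not the paper's temporal CSIoT method.

```python
def min_driver_nodes(n, edges):
    """Minimum driver nodes of a directed network (nodes 0..n-1) via
    maximum matching: unmatched nodes must be controlled directly; a
    perfectly matched network still needs one driver."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    match = [-1] * n  # match[v] = u when matched edge u -> v is chosen

    def augment(u, seen):
        # Try to match u, recursively re-routing earlier matches.
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match[v] == -1 or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    for u in range(n):
        augment(u, set())
    unmatched = [v for v in range(n) if match[v] == -1]
    return unmatched if unmatched else [0]
```

On a directed path one driver at the head suffices, while a hub broadcasting to many leaves needs almost all of them driven, which matches the intuition that dense heterogeneous SIoT sub-graphs can demand many driver nodes.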
Title: "A controllability method on the social Internet of Things (SIoT) network" (Pervasive and Mobile Computing, Vol. 105, Article 101992)
Pub Date: 2024-09-23 | DOI: 10.1016/j.pmcj.2024.101991
JiaYi Feng, Lang Li, LiuYan Yan, ChuTian Deng
The Internet of Things (IoT) has emerged as a pivotal force in the global technological revolution and industrial transformation. Despite its advancements, IoT devices continue to face significant security challenges, particularly during data transmission, and are often constrained by limited battery life and energy resources. To address these challenges, a low energy lightweight block cipher (INLEC) is proposed to mitigate data leakage in IoT devices. In addition, the Structure and Components INvolution (SCIN) design is introduced. It is constructed using two similar round functions to achieve front-back symmetry. This design ensures coherence throughout the INLEC encryption and decryption processes and addresses the increased resource consumption during the decryption phase in Substitution Permutation Networks (SPN). Furthermore, a low area S-box is generated through a hardware gate-level circuit search method combined with Genetic Programming (GP). This optimization leads to a 47.02% reduction in area compared to the S0 S-box of Midori, using UMC 0.18 μm technology. Moreover, a chaotic function is used to generate the optimal nibble-based involutive permutation, further enhancing its efficiency. In terms of performance, the energy consumption for both encryption and decryption with INLEC is 6.88 μJ/bit, a 25.21% reduction compared to Midori. Finally, INLEC is implemented using STM32L475 PanDuoLa and Nexys A7 FPGA development boards, establishing an encryption platform for IoT devices. This platform provides functions for data acquisition, transmission, and encryption.
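The involution property that SCIN-style designs exploit is easy to state and check: a component is involutive when it is its own inverse, so the same circuit serves encryption and decryption and the usual SPN decryption overhead disappears. The check below uses a toy 4-bit S-box built from 2-cycles purely for illustration; it is not INLEC's actual S-box.

```python
def is_involutive(sbox):
    """True when the S-box is its own inverse: S[S[x]] == x for every x.
    Involutive components let one circuit do both encrypt and decrypt."""
    return all(sbox[sbox[x]] == x for x in range(len(sbox)))

# Toy involutive 4-bit S-box made of eight 2-cycles (illustrative only;
# NOT the S-box from the paper):
TOY_SBOX = [0x3, 0x5, 0x8, 0x0, 0x6, 0x1, 0x4, 0xB,
            0x2, 0xD, 0xE, 0x7, 0xF, 0x9, 0xA, 0xC]
```

The same predicate applies to the nibble permutation layer: if both the S-box and the permutation are involutions and the round structure is front-back symmetric, decryption reuses the encryption datapath with only the key schedule reversed.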
Title: "INLEC: An involutive and low energy lightweight block cipher for internet of things" (Pervasive and Mobile Computing, Vol. 105, Article 101991)
Pub Date : 2024-09-20DOI: 10.1016/j.pmcj.2024.101979
Yi Ke, Quan Wan, Fangting Xie, Zhen Liang, Ziyu Wu, Xiaohui Cai
In-bed pose estimation holds significant potential in various domains, including healthcare, sleep studies, and smart homes. Pressure-sensitive bed sheets have emerged as a promising solution for this task, given their advantages of convenience, comfort, and privacy protection. However, existing studies primarily rely on ideal datasets that do not consider the presence of common daily objects such as pillows and quilts, referred to as interference, which can significantly impact the pressure distribution. As a result, there is still a gap between models trained on ideal data and real-life applications. Besides the end-to-end training approach, one potential solution is to recognize the interference and fuse the interference information into the model during training. In this study, we created a well-annotated dataset consisting of eight in-bed scenes and four common types of interference: pillows, quilts, a laptop, and a package. To facilitate the analysis, the pixels in the pressure image were categorized into five classes based on the relative position between the interference and the human. We then evaluated the performance of five neural network models for pixel-level interference recognition. The best-performing model achieved an accuracy of 80.0% in recognizing the five categories. Subsequently, we validated the utility of interference recognition in improving pose estimation accuracy. The model trained on ideal data initially shows an average joint position error of up to 30.59 cm and a Percentage of Correct Keypoints (PCK) of 0.332 on data from scenes with interference. After retraining on data including interference, the error is reduced to 13.54 cm and the PCK increases to 0.747. By integrating interference recognition information, either by excluding the interference regions or using the recognition results as input, the error can be further reduced to 12.44 cm and the PCK increased to 0.777.
Our findings represent an initial step towards the practical deployment of pressure-sensitive bed sheets in everyday life.
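The PCK figures quoted above can be made concrete with a minimal sketch of the metric, assuming its common definition: the fraction of predicted joints whose distance to the ground-truth joint falls within a fixed threshold. The threshold value and joint coordinates below are illustrative assumptions; the paper's exact threshold convention is not stated in the abstract.

```python
import math

def pck(pred, gt, threshold_cm):
    """Percentage of Correct Keypoints: fraction of predicted joints whose
    Euclidean distance to the ground-truth joint is within threshold_cm.
    (Common definition; the paper's exact convention is assumed here.)"""
    correct = sum(
        1 for (px, py), (gx, gy) in zip(pred, gt)
        if math.hypot(px - gx, py - gy) <= threshold_cm
    )
    return correct / len(gt)

# Toy example with three joints (coordinates in cm, hypothetical values):
pred = [(0.0, 0.0), (10.0, 0.0), (0.0, 20.0)]
gt   = [(1.0, 1.0), (30.0, 0.0), (0.0, 21.0)]
score = pck(pred, gt, threshold_cm=5.0)  # 2 of 3 joints within 5 cm -> ~0.667
```

Under this reading, the reported improvement from PCK 0.332 to 0.777 means that the share of joints localized within the chosen threshold more than doubled once interference information was integrated.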
{"title":"Pressure distribution based 2D in-bed keypoint prediction under interfered scenes","authors":"Yi Ke, Quan Wan, Fangting Xie, Zhen Liang, Ziyu Wu, Xiaohui Cai","doi":"10.1016/j.pmcj.2024.101979","DOIUrl":"10.1016/j.pmcj.2024.101979","url":null,"abstract":"<div><div>In-bed pose estimation holds significant potential in various domains, including healthcare, sleep studies, and smart homes. Pressure-sensitive bed sheets have emerged as a promising solution for addressing this task considering the advantages of convenience, comfort, and privacy protection. However, existing studies primarily rely on ideal datasets that do not consider the presence of common daily objects such as pillows and quilts referred to as interference, which can significantly impact the pressure distribution. As a result, there is still a gap between the models trained with ideal data and the real-life application. Besides the end-to-end training approach, one potential solution is to recognize the interference and fuse the interference information to the model during training. In this study, we created a well-annotated dataset, consisting of eight in-bed scenes and four common types of interference: pillows, quilts, a laptop, and a package. To facilitate the analysis, the pixels in the pressure image were categorized into five classes based on the relative position between the interference and the human. We then evaluated the performance of five neural network models for pixel-level interference recognition. The best-performing model achieved an accuracy of 80.0% in recognizing the five categories. Subsequently, we validated the utility of interference recognition in improving pose estimation accuracy. The ideal model initially shows an average joint position error of up to 30.59 cm and a Percentage of Correct Keypoints (PCK) of 0.332 on data from scenes with interferences. After retraining on data including interference, the error is reduced to 13.54 cm and the PCK increases to 0.747. 
By integrating interference recognition information, either by excluding the parts of the interference or using the recognition results as input, the error can be further minimized to 12.44 cm and the PCK can be maximized up to 0.777. Our findings represent an initial step towards the practical deployment of pressure-sensitive bed sheets in everyday life.</div></div>","PeriodicalId":49005,"journal":{"name":"Pervasive and Mobile Computing","volume":"105 ","pages":"Article 101979"},"PeriodicalIF":3.0,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142314386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}