"An Adaptive Temporal Convolutional Network Autoencoder for Malicious Data Detection in Mobile Crowd Sensing"
N. Owoh, Jackie Riley, Moses Ashawa, Salaheddin Hosseinzadeh, Anand Philip, Jude Osamor
Sensors (Basel, Switzerland), 2024-04-01. DOI: 10.3390/s24072353

Mobile crowdsensing (MCS) systems rely on the collective contribution of sensor data from numerous mobile devices carried by participants. However, the open and participatory nature of MCS leaves these systems vulnerable to adversarial attacks and data poisoning, in which threat actors inject malicious data into the system. A detection system that mitigates malicious sensor data is needed to maintain the integrity and reliability of the collected information. This paper addresses the issue by proposing an adaptive and robust model for detecting malicious data in MCS scenarios involving sensor data from mobile devices. The model incorporates an adaptive learning mechanism that enables the TCN-based model to continually evolve and adapt to new patterns, enhancing its ability to detect novel malicious data as threats evolve. We also present a comprehensive evaluation of the model’s performance on the SherLock datasets, demonstrating its effectiveness in accurately detecting malicious sensor data and mitigating potential threats to the integrity of MCS systems. Comparative analysis with existing models highlights the detection accuracy of the proposed TCN-based model, which achieves an accuracy score of 98%. Through these contributions, the paper aims to advance the state of the art in ensuring the trustworthiness and security of MCS systems, paving the way for more reliable and robust crowdsensing applications.
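The detection step of an autoencoder-based scheme like the one above typically reduces to thresholding reconstruction error: windows the model cannot reconstruct well are flagged as malicious. A minimal sketch of that step only (not the paper's TCN architecture; the quantile threshold, window shapes, and synthetic data are illustrative assumptions):

```python
import numpy as np

def detect_anomalies(windows, reconstructions, q=0.98):
    """Flag sensor-data windows whose reconstruction error exceeds the
    q-quantile of all window errors.  The quantile threshold is an
    illustrative choice, not the paper's exact decision rule."""
    err = np.mean((windows - reconstructions) ** 2, axis=1)  # per-window MSE
    thresh = np.quantile(err, q)
    return err > thresh, thresh

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 0.1, size=(100, 16))             # 100 windows of 16 samples
recon = clean + rng.normal(0.0, 0.01, size=clean.shape)  # faithful reconstructions
recon[7] += 1.0                                          # one poisoned window
flags, thresh = detect_anomalies(clean, recon)
```

The poisoned window's error is orders of magnitude above the rest, so it crosses the threshold regardless of the exact quantile chosen.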
"Facial Expression Recognition for Measuring Jurors’ Attention in Acoustic Jury Tests"
Reza Jamali, Andrea Generosi, José Yuri Villafan, Maura Mengoni, Leonardo Pelagalli, G. Battista, M. Martarelli, P. Chiariotti, S. A. Mansi, M. Arnesano, Paolo Castellini
Sensors (Basel, Switzerland), 2024-04-01. DOI: 10.3390/s24072298

The perception of sound greatly impacts users’ emotional states, expectations, affective relationships with products, and purchase decisions. Consequently, assessing the perceived quality of sounds through jury testing is crucial in product design. However, the subjective nature of jurors’ responses may limit the accuracy and reliability of jury test outcomes. This research explores the utility of facial expression analysis in jury testing to enhance response reliability and mitigate subjectivity. Several quantitative indicators validate the research hypothesis: the correlation between jurors’ emotional responses and valence values, the accuracy of the jury tests, and the disparities between jurors’ questionnaire responses and the emotions measured by facial expression recognition (FER). Specifically, analysis of attention levels across states reveals a discernible decrease, with 70 percent of jurors exhibiting reduced attention in the ‘distracted’ state and 62 percent in the ‘heavy-eyed’ state. Regression analysis, in turn, shows that the correlation between jurors’ valence and their choices in the jury test increases when only the data from attentive jurors are considered. This correlation highlights the potential of facial expression analysis as a reliable tool for assessing juror engagement. The findings suggest that integrating facial expression recognition can enhance the accuracy of jury testing in product design by providing a more dependable assessment of user responses and deeper insights into participants’ reactions to auditory stimuli.
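The regression result above amounts to computing the valence–choice correlation once over all samples and once over the attentive subset. A hedged sketch of that comparison with made-up data (the variable names, the boolean attention mask, and the numbers are illustrative assumptions, not the study's dataset fields):

```python
import numpy as np

def valence_choice_corr(valence, choice, attentive=None):
    """Pearson correlation between FER valence scores and jury-test choices,
    optionally restricted to samples where the juror was attentive."""
    v = np.asarray(valence, dtype=float)
    c = np.asarray(choice, dtype=float)
    if attentive is not None:
        mask = np.asarray(attentive, dtype=bool)
        v, c = v[mask], c[mask]
    return np.corrcoef(v, c)[0, 1]

# Illustrative data: the last juror was inattentive and responded at odds
# with the measured valence; masking that sample raises the correlation.
valence = [0.2, 0.4, 0.6, 0.8, -0.9]
choice = [1, 2, 3, 4, 5]
attentive = [True, True, True, True, False]
r_all = valence_choice_corr(valence, choice)
r_att = valence_choice_corr(valence, choice, attentive)
```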
"Simulation Validation of an 8-Channel Parallel-Transmit Dipole Array on an Infant Phantom: Including RF Losses for Robust Correlation with Experimental Results"
Jérémie Clément, Ö. Ipek
Sensors (Basel, Switzerland), 2024-04-01. DOI: 10.3390/s24072254

It is crucial to demonstrate a robust correlation between the performance of simulated and manufactured parallel-transmit (pTx) arrays in order to relax the highly restrictive safety margins currently in use. In this study, we describe the qualitative and quantitative validation of a simulation model against experimental results for an 8-channel dipole array at 7T. An approach that includes the radiofrequency losses in the simulation model is presented and compared to simulation models that neglect these losses. Simulated S-matrices and individual B1+-field maps were compared with experimentally measured quantities. With the proposed approach, an average relative difference of ~1.1% was found between simulated and experimental reflection coefficients, ~4.2% for the first coupling terms, and ~9.4% for the second coupling terms. A maximum normalized root-mean-square error of 4.8% was achieved between experimental and simulated individual B1+-field maps. The ability of the simulation model to accurately predict the B1+-field patterns was assessed, qualitatively and quantitatively, through comparison with experimental data. We conclude that, using the proposed model for radiofrequency losses, a robust correlation is achieved between simulated and experimental data for the 8-channel dipole array at 7T.
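The normalized root-mean-square error used above as a figure of merit between field maps can be reproduced in a few lines. Normalising by the peak of the measured map is one common convention and an assumption here, since the abstract does not restate the paper's normalisation:

```python
import numpy as np

def nrmse_percent(measured, simulated):
    """Normalized root-mean-square error (percent) between two field maps,
    e.g. measured vs. simulated B1+ maps flattened to 1-D arrays.
    Normalising by the peak of the measured map is an assumption; other
    conventions (range, mean) exist."""
    m = np.asarray(measured, dtype=float)
    s = np.asarray(simulated, dtype=float)
    rmse = np.sqrt(np.mean((m - s) ** 2))
    return 100.0 * rmse / np.max(np.abs(m))
```

Identical maps give 0%, and a uniform offset of 0.1 on a map peaking at 4 gives 2.5%, matching the definition term by term.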
"Blockchain Based Decentralized and Proactive Caching Strategy in Mobile Edge Computing Environment"
Jingpan Bai, Silei Zhu, Houling Ji
Sensors (Basel, Switzerland), 2024-04-01. DOI: 10.3390/s24072279

In the mobile edge computing (MEC) environment, edge caching can provide timely data response services for intelligent scenarios. However, given the limited storage capacity of edge nodes and the possibility of malicious node behavior, selecting cached contents and realizing decentralized, secure data caching remain challenging. In this paper, a blockchain-based decentralized and proactive caching strategy is proposed for the MEC environment to address this problem. The novelty lies in adopting blockchain in an MEC environment together with a proactive caching strategy based on node utility, and in formulating the corresponding optimization problem. The blockchain is used to build a secure and reliable service environment, and the optimal caching strategy is obtained through linear relaxation and the interior point method. Additionally, a content caching system involves a trade-off between cache space and node utility, which the proposed caching strategy resolves. There is also a trade-off between the blockchain consensus delay and the content caching latency; an offline consensus authentication method is adopted to reduce the influence of the consensus delay on content caching. The key finding is that the proposed algorithm reduces latency and ensures secure data caching in an IoT environment. Finally, simulation experiments showed that the proposed algorithm achieves up to 49.32%, 43.11%, and 34.85% improvements in cache hit rate, average content response latency, and average system utility, respectively, compared to the random content caching algorithm, and up to 9.67%, 8.11%, and 5.95% improvements, respectively, compared to the greedy content caching algorithm.
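Utility-maximising content placement under a storage budget is, in relaxed form, a fractional knapsack, and the linear relaxation mentioned above is solved exactly by filling in order of utility density. A sketch of that relaxation as a stand-in for the paper's interior-point solution (function and variable names are illustrative):

```python
def relaxed_cache(utilities, sizes, capacity):
    """Solve the LP relaxation of utility-maximising content caching
    (a knapsack): fill by utility density; at most one content ends up
    cached fractionally.  Illustrative stand-in for the interior-point
    solution in the paper."""
    order = sorted(range(len(sizes)),
                   key=lambda i: utilities[i] / sizes[i], reverse=True)
    x = [0.0] * len(sizes)          # x[i] = fraction of content i to cache
    remaining = float(capacity)
    for i in order:
        if remaining <= 0:
            break
        x[i] = min(1.0, remaining / sizes[i])
        remaining -= x[i] * sizes[i]
    return x
```

For utilities [6, 10, 12], sizes [1, 2, 3], and capacity 5, the first two contents are cached whole and two-thirds of the third fills the rest of the budget.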
"An Urban Intelligence Architecture for Heterogeneous Data and Application Integration, Deployment and Orchestration"
Stefano Silvestri, Giuseppe Tricomi, S. Bassolillo, Riccardo De Benedictis, Mario Ciampi
Sensors (Basel, Switzerland), 2024-04-01. DOI: 10.3390/s24072376

This paper describes a novel architecture that aims to provide a template for the implementation of an IT platform supporting the deployment and integration of the different digital twin subsystems that compose a complex urban intelligence system. In more detail, the proposed Smart City IT architecture has the following main purposes: (i) facilitating the deployment of the subsystems in a cloud environment; (ii) effectively storing, integrating, managing, and sharing the huge amount of heterogeneous data acquired and produced by each subsystem, using a data lake; (iii) supporting data exchange and sharing; (iv) managing and executing workflows, to automatically coordinate and run processes; and (v) providing and visualizing the required information. A prototype of the proposed IT solution was implemented leveraging open-source frameworks and technologies to test its functionalities and performance. The results of tests performed in real-world settings confirmed that the proposed architecture can efficiently and easily support the deployment and integration of heterogeneous subsystems, allowing them to share and integrate their data; select, extract, and visualize the information a user requires; promote integration with external systems; and define and execute workflows that orchestrate the various subsystems involved in complex analyses and processes.
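The core pattern — subsystems sharing results through a common store while a workflow runs their steps in order — can be caricatured in a few lines. This is a toy sketch of the idea only; the class, method, and key names are invented and bear no relation to the platform's actual interfaces:

```python
class UrbanPlatform:
    """Toy sketch of the orchestration idea: subsystems publish data into a
    shared store (the 'data lake') and a workflow runs registered steps in
    order, each reading from and writing back to the store."""

    def __init__(self):
        self.lake = {}    # shared heterogeneous data store
        self.steps = []   # ordered workflow steps

    def publish(self, key, value):
        self.lake[key] = value

    def add_step(self, fn):
        self.steps.append(fn)

    def run(self):
        for fn in self.steps:   # orchestrate subsystem processes in order
            fn(self)

# A traffic subsystem publishes counts; an analytics step aggregates them.
p = UrbanPlatform()
p.publish("traffic.counts", [12, 7, 30])
p.add_step(lambda pl: pl.publish("traffic.total", sum(pl.lake["traffic.counts"])))
p.run()
```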
"The Impacts of Burn Severity and Frequency on Erosion in Western Arnhem Land, Australia"
D. Bretreger, Gregory R. Hancock, John Lowry, I. Senanayake, In-Young Yeo
Sensors (Basel, Switzerland), 2024-04-01. DOI: 10.3390/s24072282

Wildfires are pivotal to the functioning of many ecosystems globally, including the magnitude of surface erosion rates. This study investigates the relationships between surface erosion rates and wildfire intensity in the tropical northern savanna of Australia. The occurrence of fires in western Arnhem Land, Northern Territory, Australia was determined with remotely sensed digital datasets as well as analogue erosion measurement methods. Satellite imagery was analysed to quantify burn severity via a monthly delta normalised burn ratio (dNBR), which was compared and correlated against on-ground erosion measurements (erosion pins) over 13 years. The dNBR for each year (up to +0.4) displayed no relationship with subsequent erosion (up to ±4 mm of erosion/deposition per year). The poor correlation was attributed to low fire severity, patchy burning, and the significant time between fires and erosion-inducing rainfall, with additional influences from surface roughness caused by feral pig disturbance and cyclone impacts. These findings oppose many other studies that have found that fires increase surface erosion, accentuating the unique ecosystem characteristics and fire regime properties of the tropical Northern Territory. Late dry season fires of high severity were not observed in this study and require further investigation. Ecosystems such as the one examined here require specialised management practices that acknowledge their specific functions and processes. The methods employed combine both analogue and digital sensors to improve understanding of a unique environmental system.
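The dNBR metric used above follows a standard remote-sensing definition: the Normalised Burn Ratio (NBR) contrasts near-infrared and shortwave-infrared reflectance, and dNBR is the pre-fire NBR minus the post-fire NBR. A sketch under that standard definition (band extraction from imagery is omitted; the sample reflectance values are illustrative):

```python
import numpy as np

def nbr(nir, swir):
    """Normalised Burn Ratio from near-infrared and shortwave-infrared
    reflectance; healthy vegetation gives high values, burnt areas low."""
    nir = np.asarray(nir, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (nir - swir) / (nir + swir)

def dnbr(pre_nir, pre_swir, post_nir, post_swir):
    """Delta NBR: pre-fire NBR minus post-fire NBR; larger positive values
    indicate more severe burning."""
    return nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)
```

Both functions accept scalars or whole raster bands as NumPy arrays, so a monthly dNBR map is a single call on the two image pairs.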
"Impact of Reducing Statistically Small Population Sampling on Threshold Detection in FBG Optical Sensing"
G. Cibira, Ivan Glesk, Jozef Dubovan, Daniel Benedikovič
Sensors (Basel, Switzerland), 2024-04-01. DOI: 10.3390/s24072285

Many techniques have been studied for recovering information from shared media such as optical fiber carrying different types of communication, sensing, and data streaming. This article focuses on a simple method for retrieving the targeted information with the fewest necessary significant samples when using statistical population sampling. The focus here is on the statistical denoising and detection of fiber Bragg grating (FBG) power spectra. The impact of two-sided and one-sided sliding window techniques is investigated, with the window size varied up to one-half of the symmetrical FBG power spectrum bandwidth. Both two-sided and one-sided small population sampling techniques were investigated experimentally. We found that the shorter sliding window delivered lower processing latency, which would benefit real-time applications. The calculated detection thresholds were used for an in-depth analysis of the data obtained. It was found that the three-sigma normality rule need not be followed when small population sampling is used. Experimental demonstrations and analyses also showed that the novel denoising and statistical threshold detection do not depend on prior knowledge of the probability distribution functions describing the FBG power spectrum peaks and background noise. We demonstrated that the adaptability of the detection thresholds depends strongly on the mean and standard deviation of the small population sample.
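A sliding-window threshold of the kind discussed above can be sketched as a local mean-plus-k-sigma rule over a two-sided window. Excluding the sample under test and truncating the window at the spectrum edges are assumptions of this sketch, not necessarily the authors' exact choices:

```python
import numpy as np

def sliding_threshold(power, half_win, k=3.0):
    """Per-sample detection threshold from a two-sided sliding window:
    local mean plus k local standard deviations, computed from the window
    around each sample (sample under test excluded).  k = 3 mirrors the
    three-sigma rule the paper revisits for small populations."""
    power = np.asarray(power, dtype=float)
    n = len(power)
    thresh = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half_win), min(n, i + half_win + 1)
        seg = np.concatenate([power[lo:i], power[i + 1:hi]])
        thresh[i] = seg.mean() + k * seg.std()
    return thresh

# A flat background with a single FBG peak: only the peak crosses threshold.
power = np.zeros(50)
power[25] = 10.0
flags = power > sliding_threshold(power, half_win=5)
```

Shrinking `half_win` reduces the population each threshold is computed from, which is exactly the small-sampling regime the paper examines.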
"SDACS: Blockchain-Based Secure and Dynamic Access Control Scheme for Internet of Things"
Q. Gong, Jinnan Zhang, Zheng Wei, Xinmin Wang, Xia Zhang, Xin Yan, Yang Liu, Liming Dong
Sensors (Basel, Switzerland), 2024-04-01. DOI: 10.3390/s24072267

With the rapid growth of the Internet of Things (IoT), massive numbers of terminal devices are connected to the network, generating a large amount of IoT data. Reliable sharing of IoT data is crucial for fields such as smart home and healthcare, as it promotes the intelligence of the IoT and provides faster solutions to problems. Traditional data sharing schemes usually rely on a trusted centralized server to mediate each attempted access from users to data, which faces serious challenges of a single point of failure, low reliability, and an opaque access process in current IoT environments. To address these disadvantages, we propose a secure and dynamic access control scheme for the IoT, named SDACS, which enables data owners to achieve decentralized and fine-grained access control in an auditable and reliable way. For access control, attribute-based access control (ABAC), Hyperledger Fabric, and the InterPlanetary File System (IPFS) were used, with four kinds of access control contracts deployed on the blockchain to coordinate and implement access policies. Additionally, a lightweight, certificateless authentication protocol was proposed to minimize the disclosure of identity information and ensure double-layer protection of data through secure off-chain identity authentication and message transmission. Experimental and theoretical analysis demonstrated that our scheme maintains high throughput while achieving high security and stability in IoT data sharing scenarios.
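At the core of any ABAC scheme, including one enforced by smart contracts, the decision reduces to matching subject and resource attributes against policy constraints. A minimal sketch of that check (the policy schema and the example attributes are illustrative, not the SDACS contract interface):

```python
def abac_allow(subject, resource, action, policies):
    """Minimal attribute-based access control check: grant the request if
    any policy permits the action and all of its subject and resource
    attribute constraints match the request's attributes."""
    for p in policies:
        if (p["action"] == action
                and all(subject.get(k) == v for k, v in p["subject"].items())
                and all(resource.get(k) == v for k, v in p["resource"].items())):
            return True
    return False   # default deny

# One hypothetical policy: doctors may read ECG records.
policies = [{"action": "read",
             "subject": {"role": "doctor"},
             "resource": {"type": "ecg"}}]
```

In a blockchain deployment, the same predicate would live inside chaincode so every grant or denial is recorded and auditable; the Python form only shows the decision logic.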
Javier Núñez, Arjen Boersma, R. Koldeweij, Joseph Trimboli
Occupational exposure to airborne dust is responsible for numerous respiratory and cardiovascular diseases. Because of these hazards, air samples are regularly collected on filters and sent for laboratory analysis to ensure compliance with regulations. Unfortunately, this approach often takes weeks to provide a result, which makes it impossible to identify dust sources or protect workers in real time. To address these challenges, we developed a system that characterizes airborne dust by its spectro-chemical profile. In this device, a micro-cyclone concentrates particles from the air and introduces them into a hollow waveguide, where an infrared signature is obtained. An algorithm is then used to quantitate the composition of respirable particles by incorporating the infrared features of the most relevant chemical groups and compensating for Mie scattering. With this approach, the system can successfully differentiate mixtures of inorganic materials associated with construction sites in near-real time. The use of a free-space optic assembly improves the light throughput significantly, which enables detection limits of approximately 10 µg/m³ with a 10-minute sampling time. While respirable crystalline silica was the focus of this work, it is hoped that the flexibility of the platform will enable different aerosols to be detected in other occupational settings.
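The quantitation step, decomposing a measured infrared spectrum into contributions from known chemical groups, can be illustrated as a linear unmixing problem. The toy example below uses synthetic Gaussian bands as stand-ins for real reference spectra and solves for the mixture fractions by least squares; the paper's actual algorithm also compensates for Mie scattering, which this sketch omits.

```python
import numpy as np

# Toy linear unmixing: a measured spectrum is modeled as a weighted sum of
# reference spectra of pure components (e.g., silica, carbonate, sulfate).
wavenumbers = np.linspace(400, 1600, 200)

def band(center, width):
    """Synthetic Gaussian absorption band standing in for a real reference spectrum."""
    return np.exp(-((wavenumbers - center) / width) ** 2)

references = np.stack([band(1100, 60),   # stand-in for a Si-O stretch region
                       band(1450, 40),   # stand-in for a carbonate band
                       band(600, 80)])   # stand-in for a sulfate band

true_fractions = np.array([0.5, 0.3, 0.2])
measured = true_fractions @ references   # noiseless synthetic mixture spectrum

# Solve measured ≈ references.T @ fractions in the least-squares sense.
fractions, *_ = np.linalg.lstsq(references.T, measured, rcond=None)
print(np.round(fractions, 3))  # recovers [0.5, 0.3, 0.2] in this noiseless case
```

With real measurements the fit would additionally constrain the fractions to be non-negative and include a scattering baseline term, but the core inversion has this shape.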
{"title":"A Portable Infrared System for Identification of Particulate Matter","authors":"Javier Núñez, Arjen Boersma, R. Koldeweij, Joseph Trimboli","doi":"10.3390/s24072288","DOIUrl":"https://doi.org/10.3390/s24072288","url":null,"abstract":"Occupational exposure to airborne dust is responsible for numerous respiratory and cardiovascular diseases. Because of these hazards, air samples are regularly collected on filters and sent for laboratory analysis to ensure compliance with regulations. Unfortunately, this approach often takes weeks to provide a result, which makes it impossible to identify dust sources or protect workers in real time. To address these challenges, we developed a system that characterizes airborne dust by its spectro-chemical profile. In this device, a micro-cyclone concentrates particles from the air and introduces them into a hollow waveguide where an infrared signature is obtained. An algorithm is then used to quantitate the composition of respirable particles by incorporating the infrared features of the most relevant chemical groups and compensating for Mie scattering. With this approach, the system can successfully differentiate mixtures of inorganic materials associated with construction sites in near-real time. The use of a free-space optic assembly improves the light throughput significantly, which enables detection limits of approximately 10 µg/m3 with a 10 minute sampling time. 
While respirable crystalline silica was the focus of this work, it is hoped that the flexibility of the platform will enable different aerosols to be detected in other occupational settings.","PeriodicalId":221960,"journal":{"name":"Sensors (Basel, Switzerland)","volume":"607 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140779998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The performance of three-dimensional (3D) point cloud reconstruction is affected by dynamic features such as vegetation. Vegetation can be detected by near-infrared (NIR)-based indices; however, the sensors providing multispectral data are resource intensive. To address this issue, this study proposes a two-stage framework to, first, improve the performance of 3D point cloud generation of buildings with a two-view SfM algorithm and, second, reduce noise caused by vegetation. The proposed framework can also overcome the lack of near-infrared data when identifying vegetation areas to reduce interference in the SfM process. The first stage includes cross-sensor training, model selection, and the evaluation of image-to-image RGB to color infrared (CIR) translation with Generative Adversarial Networks (GANs). The second stage includes feature detection with multiple feature detector operators, feature removal with respect to the NDVI-based vegetation classification, masking, matching, pose estimation, and triangulation to generate sparse 3D point clouds. The materials utilized in both stages are a publicly available RGB-NIR dataset and satellite and UAV imagery. The experimental results indicate that the cross-sensor and category-wise validation achieves accuracies of 0.9466 and 0.9024, with kappa coefficients of 0.8932 and 0.9110, respectively. The histogram-based evaluation demonstrates that the predicted NIR band is consistent with the original NIR data of the satellite test dataset. Finally, the test on UAV RGB imagery and artificially generated NIR with a segmentation-driven two-view SfM shows that the proposed framework can effectively translate RGB to CIR for NDVI calculation. Further, the artificially generated NDVI supports vegetation segmentation and classification. As a result, the generated point cloud is less noisy, and the 3D model is enhanced.
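The vegetation classification in the second stage rests on the standard NDVI formula, NDVI = (NIR − Red)/(NIR + Red). A minimal sketch of computing NDVI from the (possibly GAN-generated) NIR band and thresholding it into a vegetation mask follows; the 0.3 threshold is an illustrative, scene-dependent choice, not a value taken from the paper.

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalized Difference Vegetation Index in [-1, 1]; eps avoids division by zero."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

def vegetation_mask(nir, red, threshold=0.3):
    """Boolean mask of likely-vegetation pixels, used here to drop SfM features."""
    return ndvi(nir, red) > threshold

# Tiny 2x2 example: bright-NIR / dark-red pixels behave like vegetation.
nir = np.array([[200, 50], [180, 60]], dtype=np.uint8)
red = np.array([[ 40, 45], [ 50, 55]], dtype=np.uint8)
print(vegetation_mask(nir, red))  # [[ True False], [ True False]]
```

In the framework described above, detected features falling inside this mask would be discarded before matching and triangulation, which is what keeps vegetation noise out of the sparse point cloud.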
{"title":"Enhancing Building Point Cloud Reconstruction from RGB UAV Data with Machine-Learning-Based Image Translation","authors":"E. J. Dippold, Fuan Tsai","doi":"10.3390/s24072358","DOIUrl":"https://doi.org/10.3390/s24072358","url":null,"abstract":"The performance of three-dimensional (3D) point cloud reconstruction is affected by dynamic features such as vegetation. Vegetation can be detected by near-infrared (NIR)-based indices; however, the sensors providing multispectral data are resource intensive. To address this issue, this study proposes a two-stage framework to firstly improve the performance of the 3D point cloud generation of buildings with a two-view SfM algorithm, and secondly, reduce noise caused by vegetation. The proposed framework can also overcome the lack of near-infrared data when identifying vegetation areas for reducing interferences in the SfM process. The first stage includes cross-sensor training, model selection and the evaluation of image-to-image RGB to color infrared (CIR) translation with Generative Adversarial Networks (GANs). The second stage includes feature detection with multiple feature detector operators, feature removal with respect to the NDVI-based vegetation classification, masking, matching, pose estimation and triangulation to generate sparse 3D point clouds. The materials utilized in both stages are a publicly available RGB-NIR dataset, and satellite and UAV imagery. The experimental results indicate that the cross-sensor and category-wise validation achieves an accuracy of 0.9466 and 0.9024, with a kappa coefficient of 0.8932 and 0.9110, respectively. The histogram-based evaluation demonstrates that the predicted NIR band is consistent with the original NIR data of the satellite test dataset. Finally, the test on the UAV RGB and artificially generated NIR with a segmentation-driven two-view SfM proves that the proposed framework can effectively translate RGB to CIR for NDVI calculation. 
Further, the artificially generated NDVI is able to segment and classify vegetation. As a result, the generated point cloud is less noisy, and the 3D model is enhanced.","PeriodicalId":221960,"journal":{"name":"Sensors (Basel, Switzerland)","volume":"251 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140781510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}