The use of magnetometers arranged in a gradiometer configuration offers a practical and widely used solution, particularly in archaeological applications, where the sources of interest are generally shallow. Since magnetic anomalies due to archaeological remains often have low amplitudes, highly sensitive magnetic sensors are kept very close to the ground to reveal buried structures. The deployment of Unmanned Aerial Vehicles (UAVs) is increasingly becoming a reliable and valuable tool for the acquisition of magnetic data, providing uniform coverage of large areas and access to even very steep terrain, saving time and reducing risk. However, using a vertical gradiometer for drone-borne measurements remains challenging because of the in-flight instability of the drone–magnetometer system and of noise caused by the magnetic interference of the mobile platform or by the oscillation of the suspended sensors. We present the implementation of a magnetic vertical gradiometer UAV system and its use in an archaeological area of Southern Italy. To reduce the magnetic and electromagnetic noise caused by the aircraft, the magnetometer was suspended 3 m below the drone using ropes. A Continuous Wavelet Transform analysis of data collected in controlled tests confirmed that several characteristic power spectrum peaks occur at frequencies compatible with the magnetometer oscillations. This noise was then eliminated with a properly designed low-pass filter. The resulting drone-borne vertical gradient data compare very well with ground-based magnetic measurements collected in the same area, taken as a control dataset.
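The oscillation-noise removal described above can be sketched as follows: a synthetic vertical-gradient profile is contaminated with a pendulum-like swing tone, and a Butterworth low-pass filter stands in for the paper's "properly designed low-pass filter". All sampling rates, frequencies, and amplitudes here are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10.0  # Hz, hypothetical magnetometer sampling rate
t = np.arange(0, 60, 1 / fs)

# Synthetic vertical-gradient signal: a slow anomaly plus pendulum-like
# oscillation noise at ~0.8 Hz (illustrative frequency, not from the paper).
anomaly = 5.0 * np.exp(-((t - 30.0) ** 2) / 50.0)
swing_noise = 1.5 * np.sin(2 * np.pi * 0.8 * t)
raw = anomaly + swing_noise

# 4th-order Butterworth low-pass with the cutoff placed below the
# oscillation band identified (in the paper) via the CWT spectrum.
b, a = butter(4, 0.3 / (fs / 2), btype="low")
filtered = filtfilt(b, a, raw)  # zero-phase filtering

raw_noise = np.std(raw - anomaly)
residual_noise = np.std(filtered - anomaly)
```

Because the slow anomaly occupies frequencies well below the cutoff while the swing tone lies well above it, the filter suppresses the oscillation while leaving the anomaly nearly intact.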
Filippo Accomando, Giovanni Florio, "Drone-Borne Magnetic Gradiometry in Archaeological Applications," Sensors, 1 July 2024, doi:10.3390/s24134270.
Appropriate traffic cooperation at intersections plays a crucial role in modern intelligent transportation systems. To enhance traffic efficiency at intersections, this paper establishes a cooperative motion optimization strategy that adjusts the trajectories of autonomous vehicles (AVs) based on risk degree. Initially, AVs are assumed to be free to select any exit lane, thereby optimizing spatial resources, and trajectories are generated for each possible lane. Subsequently, a motion optimization algorithm based on risk degree is introduced, which takes into account the trajectories and motion states of the AVs; the risk degree serves to prevent collisions between conflicting AVs. A cooperative motion optimization strategy is then formulated, incorporating car-following behavior, traffic signals, and conflict resolution as constraints. Specifically, the movement of all vehicles at the intersection is adjusted to achieve safer and more efficient traffic flow. The strategy is validated through simulation in SUMO. The results indicate improvements in traffic efficiency of 20.51% and 11.59% in two typical scenarios compared to a First-Come-First-Served approach. Moreover, numerical experiments reveal significant enhancements in the stability of the optimized AV accelerations.
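As a rough illustration of a risk-degree signal between two conflicting AVs, the sketch below scores risk by the predicted minimum gap along straight-line trajectories. The abstract does not specify the actual formulation, so the function name, the safety radius, and the linear mapping to [0, 1] are all assumptions.

```python
import math

def risk_degree(p1, v1, p2, v2, d_safe=5.0):
    """Toy risk degree between two AVs from their predicted closest approach.

    p*, v* are 2D position (m) and velocity (m/s) tuples; d_safe is a
    hypothetical safety radius. The paper's actual risk model may differ.
    """
    # Relative position/velocity; minimise |r + v*t| over t >= 0.
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]
    vv = vx * vx + vy * vy
    t_min = 0.0 if vv == 0 else max(0.0, -(rx * vx + ry * vy) / vv)
    d_min = math.hypot(rx + vx * t_min, ry + vy * t_min)
    # Risk grows linearly as the predicted minimum gap shrinks below d_safe.
    return max(0.0, 1.0 - d_min / d_safe)

# Two AVs heading for the same conflict point vs. diverging paths.
crossing = risk_degree((0, 0), (10, 0), (30, -30), (0, 10))
diverging = risk_degree((0, 0), (10, 0), (0, 50), (0, 10))
```

A scheduler could then resolve conflicts only for pairs whose risk degree exceeds a threshold, leaving low-risk vehicles unconstrained.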
Miaomiao Liu, Mingyue Zhu, Minkun Yao, Pengrui Li, Renjing Tang, Hui Deng, "Cooperative Motion Optimization Based on Risk Degree under Automatic Driving Environment," Sensors, 1 July 2024, doi:10.3390/s24134275.
By 2030, a trillion things are expected to be connected. In such a scenario, powering a trillion nodes would require trillions of batteries, resulting in maintenance challenges and significant management costs. This research contributes to sustainable wireless sensing by introducing an energy-autonomous wireless sensor node (EAWSN): a self-sufficient, maintenance-free device suitable for long-term, mass-scale Internet of Things (IoT) applications in remote and inaccessible environments. The EAWSN uses Low-Power Wide Area Network (LPWAN) connectivity via LoRaWAN and is powered by a commercial photovoltaic cell that can also harvest ambient light indoors. Its storage consists of a 2 mF capacitor, which allows the EAWSN to transmit 30-byte data packets up to 560 m, thanks to opportunistic LoRaWAN data rate selection that trades off energy consumption against network coverage. The reliability of the platform is demonstrated through validation in an urban environment, showing strong performance over considerable distances.
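The feasibility of capacitor-only storage can be checked with a back-of-the-envelope budget using E = ½CV². Only the 2 mF value comes from the abstract; the usable voltage window and the radio current/air-time figures below are illustrative assumptions.

```python
# Back-of-the-envelope energy budget for a capacitor-powered LoRaWAN node.
# Only the 2 mF capacitance is from the abstract; the rest is illustrative.

C = 2e-3                    # F, storage capacitor (from the abstract)
v_full, v_min = 3.6, 1.8    # V, hypothetical usable voltage window

# Usable energy between the two voltage levels: E = 1/2 * C * (Vf^2 - Vm^2)
usable_j = 0.5 * C * (v_full**2 - v_min**2)

# Rough cost of one short uplink: TX current * supply voltage * air time
# (assumed SX1276-class radio at a fast data rate; actual figures differ).
i_tx, v_supply, t_air = 0.044, 3.3, 0.062   # A, V, s
tx_cost_j = i_tx * v_supply * t_air

packets_per_charge = usable_j / tx_cost_j
```

With these assumed numbers the capacitor holds roughly 9.7 mJ, about one packet's worth of energy per charge cycle, which is why opportunistic data rate selection (shorter air time at short range) matters so much for such nodes.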
Roberto La Rosa, Lokman Boulebnane, Antonino Pagano, Fabrizio Giuliano, Daniele Croce, "Towards Mass-Scale IoT with Energy-Autonomous LoRaWAN Sensor Nodes," Sensors, 1 July 2024, doi:10.3390/s24134279.
Human–object interaction (HOI) detection identifies a set of interactions in an image, involving both the recognition of interacting instances and the classification of interaction categories. The complexity and variety of image content make this task challenging. Recently, Transformers have been applied in computer vision and have received attention in the HOI detection task. This paper therefore proposes a novel Part Refinement Tandem Transformer (PRTT) for HOI detection. Unlike previous Transformer-based HOI methods, PRTT utilizes multiple decoders to split and process the rich elements of HOI prediction and introduces a new part state feature extraction (PSFE) module to improve the final interaction category classification. We adopt a novel prior feature integrated cross-attention (PFIC) mechanism that uses the fine-grained part-state semantic and appearance features output by the PSFE module to guide queries. We validate our method on two public datasets, V-COCO and HICO-DET. Compared to state-of-the-art models, PRTT significantly improves human–object interaction detection performance.
Zhan Su, Hongzhe Yang, "A Novel Part Refinement Tandem Transformer for Human–Object Interaction Detection," Sensors, 1 July 2024, doi:10.3390/s24134278.
Seunghyun Hwang, Changhyun Jun, Carlo De Michele, Hyeon-Joon Kim, Jinwook Lee
This paper proposes a novel method to estimate rainfall intensity by analyzing the sound of raindrops. An innovative device for collecting acoustic data was designed, capable of blocking ambient noise in rainy environments. The device was deployed in real rainfall conditions during both the monsoon season and non-monsoon season to record raindrop sounds. The collected raindrop sounds were divided into 1 s, 10 s, and 1 min intervals, and the performance of rainfall intensity estimation for each segment length was compared. First, the rainfall occurrence was determined based on four extracted frequency domain features (average of dB, frequency-weighted average of dB, standard deviation of dB, and highest frequency), followed by a quantitative estimation of the rainfall intensity for the periods in which rainfall occurred. The results indicated that the best estimation performance was achieved when using 10 s segments, corresponding to the following metrics: accuracy: 0.909, false alarm ratio: 0.099, critical success index: 0.753, precision: 0.901, recall: 0.821, and F1 score: 0.859 for rainfall occurrence classification; and root mean square error: 1.675 mm/h, R2: 0.798, and mean absolute error: 0.493 mm/h for quantitative rainfall intensity estimation. The proposed small and lightweight device is convenient to install and manage and is remarkably cost-effective compared with traditional rainfall observation equipment. Additionally, this compact rainfall acoustic collection device can facilitate the collection of detailed rainfall information over vast areas.
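A minimal sketch of the four frequency-domain features named above, computed over one segment of synthetic audio. The exact feature definitions and the sampling setup are assumptions; in particular, "highest frequency" is read here as the strongest spectral bin.

```python
import numpy as np

def segment_features(x, fs):
    """Illustrative frequency-domain features over one audio segment
    (reimplementing the four features named in the abstract as assumed)."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    db = 20 * np.log10(spec + 1e-12)      # magnitude spectrum in dB
    return {
        "avg_db": db.mean(),
        "freq_weighted_avg_db": (freqs * db).sum() / freqs.sum(),
        "std_db": db.std(),
        # "highest frequency": location of the strongest spectral bin
        "peak_freq": freqs[spec.argmax()],
    }

fs = 8000                                  # Hz, assumed recording rate
t = np.arange(0, 10, 1 / fs)               # one 10 s segment
rain = 0.3 * np.sin(2 * np.pi * 1200 * t)  # synthetic 1.2 kHz tone
feats = segment_features(rain, fs)
```

In the paper's pipeline, such feature vectors per segment would first feed a rain/no-rain classifier and then a quantitative intensity estimator.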
"Rainfall Observation Leveraging Raindrop Sounds Acquired Using Waterproof Enclosure: Exploring Optimal Length of Sounds for Frequency Analysis," Sensors, 1 July 2024, doi:10.3390/s24134281.
Claudiu-Ionel Nicola, Marcel Nicola, Dumitru Sacerdoțianu, Ion Pătru
Based on the need for real-time sag monitoring of Overhead Power Lines (OPLs) for electricity transmission, this article presents the implementation of a hardware and software system for online monitoring of OPL cables. The mathematical model, based on differential equations, and the methods for the algorithmic calculation of OPL cable sag are presented. Given this model, the cable sag can be calculated in different ways depending on the sensors used, and the presented application employs a variety of sensors. A direct calculation is therefore made with one method, and the sag is then recomputed by an alternative method using a second group of sensors; agreement between the two results verifies both the calculation and the correct functioning of the sensors, effectively providing a sensor-fault observer. The hardware architecture of the OPL cable online monitoring application is presented, together with the main characteristics of the sensors and communication equipment used. The configurations required to transmit data using the Modbus and ZigBee protocols are also presented. The main software modules of the OPL cable condition monitoring application are described; they monitor the main parameters of the power line and display the results both on the electricity provider's intranet, using a web server and a MySQL database, and on the Internet, using an Internet of Things (IoT) server. This separation of the data visualisation modes is designed to ensure a high level of cyber security. The overall accuracy of the entire OPL cable sag calculation system is estimated at 0.1%. Starting from the mathematical model of the OPL cable sag calculation, the article goes through the stages of creating such a monitoring system, from numerical simulations carried out in Matlab to the real-time implementation of the monitoring application in Laboratory Virtual Instrument Engineering Workbench (LabVIEW).
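The sag computation itself can be illustrated with the standard level-span formulas. These are textbook relations, not the paper's differential-equation model, and the conductor parameters below are made up.

```python
import math

def parabolic_sag(w, span, H):
    """Small-sag (parabolic) approximation: sag = w * L^2 / (8 * H).

    w: conductor weight per unit length (N/m), span: horizontal span L (m),
    H: horizontal tension component (N).
    """
    return w * span**2 / (8 * H)

def catenary_sag(w, span, H):
    """Exact catenary sag for a level span: a*(cosh(L/(2a)) - 1), a = H/w."""
    a = H / w
    return a * (math.cosh(span / (2 * a)) - 1)

# Illustrative conductor values (not from the paper).
sag_p = parabolic_sag(w=15.0, span=300.0, H=25_000.0)
sag_c = catenary_sag(w=15.0, span=300.0, H=25_000.0)
```

For typical transmission spans the two agree to within millimetres, which is why a monitoring system can cross-check a sensor-derived sag value against either formula given tension and span measurements.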
"Real-Time Monitoring of Cable Sag and Overhead Power Line Parameters Based on a Distributed Sensor Network and Implementation in a Web Server and IoT," Sensors, 1 July 2024, doi:10.3390/s24134283.
A point cloud is a representation of objects or scenes using unordered points comprising 3D positions and attributes. The ability of point clouds to mimic natural forms has gained significant attention from diverse applied fields, such as virtual reality and augmented reality. However, point clouds, especially those representing dynamic scenes or objects in motion, must be compressed efficiently due to their huge data volume. The latest video-based point cloud compression (V-PCC) standard for dynamic point clouds divides the 3D point cloud into many patches using computationally expensive normal estimation, segmentation, and refinement. The patches are projected onto a 2D plane so that existing video coding techniques can be applied. This process often loses proximity information and some original points, inducing artefacts that adversely affect user perception. The proposed method segments dynamic point clouds based on shape similarity and occlusion before patch generation. This segmentation strategy helps maintain the points' proximity and retains more original points by exploiting the density and occlusion of the points. The experimental results establish that the proposed method significantly outperforms the V-PCC standard and other relevant methods in rate–distortion performance and subjective quality testing, for both the geometry and texture data of several benchmark video sequences.
Faranak Tohidi, Manoranjan Paul, Anwaar Ulhaq, Subrata Chakraborty, "Improved Video-Based Point Cloud Compression via Segmentation," Sensors, 1 July 2024, doi:10.3390/s24134285.
Autonomous outdoor moving objects such as cars, motorcycles, bicycles, and pedestrians pose different risks to the safety of Visually Impaired People (VIPs). Consequently, many camera-based VIP mobility assistive solutions have been proposed. However, they fail to guarantee VIP safety in practice: they cannot effectively prevent collisions with the more dangerous threats moving at higher speeds, namely Critical Moving Objects (CMOs). This paper presents the first practical camera-based VIP mobility assistant scheme, ARAware, which effectively identifies CMOs in real time to give the VIP more time to avoid danger, simultaneously addressing CMO identification, CMO risk-level evaluation and classification, and prioritised CMO warning notification. Experimental results based on our real-world prototype demonstrate that ARAware accurately identifies CMOs (with 97.26% mAR and 88.20% mAP) in real time (processing at 32 fps for 30 fps incoming video). It precisely classifies CMOs according to their risk levels (with 100% mAR and 91.69% mAP) and warns about high-risk CMOs in a timely manner, while effectively reducing false alarms by postponing the warnings for low-risk CMOs. Compared to the closest state-of-the-art approach, DEEP-SEE, ARAware achieves significantly higher CMO identification accuracy (by 42.62% in mAR and 10.88% in mAP), with a 93% faster end-to-end processing speed.
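The "postpone low-risk warnings" policy can be caricatured with a simple time-to-collision threshold rule. The thresholds and the TTC-only criterion are assumptions for illustration; ARAware's actual risk model also accounts for object class and trajectory.

```python
def cmo_risk_level(distance_m, speed_mps, ttc_high=2.0, ttc_low=5.0):
    """Toy risk classifier for an object approaching the VIP.

    distance_m: current range to the object; speed_mps: closing speed
    (<= 0 means not approaching). Thresholds are hypothetical.
    """
    if speed_mps <= 0:
        return "none"                 # not approaching: no warning
    ttc = distance_m / speed_mps      # time to collision, seconds
    if ttc < ttc_high:
        return "high"                 # warn immediately
    if ttc < ttc_low:
        return "low"                  # postpone warning to cut false alarms
    return "none"

car = cmo_risk_level(distance_m=15, speed_mps=10)   # TTC = 1.5 s
bike = cmo_risk_level(distance_m=20, speed_mps=5)   # TTC = 4.0 s
```

Prioritising warnings this way is what lets a system stay useful in dense traffic: only the imminent threats interrupt the user right away.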
Hadeel Surougi, Cong Zhao, Julie A. McCann, "ARAware: Assisting Visually Impaired People with Real-Time Critical Moving Object Identification," Sensors, 1 July 2024, doi:10.3390/s24134282.
Zhenyu Xu, Zijing Wu, Linlin Wang, Ziyue Ma, Juan Deng, Hong Sha, Hong Wang
This study aims to integrate a convolutional neural network (CNN) and a Random Forest model into a rehabilitation assessment device that provides comprehensive gait analysis for evaluating movement disorders, helping physicians assess rehabilitation progress by distinguishing gait characteristics under different walking modes. Equipped with accelerometers and six-axis force sensors, the device monitors body symmetry and upper limb strength during rehabilitation. Data were collected from normal and abnormal walking groups. A knee joint limiter was applied to subjects to simulate different levels of movement disorders. Features were extracted from the collected data and analyzed using a CNN. The overall performance was scored with Random Forest model weights. Significant differences in average acceleration values between the moderately abnormal (MA) and severely abnormal (SA) groups (without vehicle assistance) were observed (p < 0.05), whereas no significant differences were found between the MA with vehicle assistance (MA-V) and SA with vehicle assistance (SA-V) groups (p > 0.05). Force sensor data showed good concentration in the normal walking group and more scatter in the SA-V group. The CNN and Random Forest model accurately recognized gait conditions, achieving average accuracies of 88.4% and 92.3%, respectively, demonstrating that the proposed method provides more accurate gait evaluations for patients with movement disorders.
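A toy version of the Random Forest scoring stage, trained on synthetic gait features. The feature choice, cluster parameters, and forest size are illustrative only; the paper's pipeline additionally feeds CNN-extracted features from the accelerometer and force-sensor streams.

```python
# Sketch: classifying gait condition with a Random Forest over extracted
# features (synthetic data; not the paper's dataset or exact features).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical features per walking trial: [mean |acceleration|,
# acceleration std, left/right force asymmetry]; 0 = normal, 1 = abnormal.
normal = rng.normal([1.0, 0.2, 0.05], 0.05, size=(100, 3))
abnormal = rng.normal([0.7, 0.5, 0.30], 0.05, size=(100, 3))
X = np.vstack([normal, abnormal])
y = np.array([0] * 100 + [1] * 100)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
train_acc = clf.score(X, y)
```

With well-separated feature clusters like these, the forest separates the classes easily; the hard part in practice is extracting features that separate real patients this cleanly.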
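The entry above pairs a CNN feature extractor with a Random Forest scorer. As a minimal sketch of that two-stage idea (not the authors' implementation: the data are synthetic, and simple hand-crafted statistics stand in for the CNN's learned features), accelerometer windows can be reduced to a small feature vector and classified with a Random Forest:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def gait_features(window):
    # Simple statistics per accelerometer window (a stand-in for the
    # CNN feature extractor): mean, std, peak-to-peak amplitude, RMS.
    return np.array([window.mean(), window.std(),
                     window.max() - window.min(),
                     np.sqrt((window ** 2).mean())])

def make_windows(n, amplitude, noise):
    # Synthetic acceleration windows: the simulated "normal" gait has a
    # larger, more regular oscillation than the simulated "abnormal" one.
    t = np.linspace(0, 4 * np.pi, 128)
    return np.stack([amplitude * np.sin(t) + rng.normal(0, noise, t.size)
                     for _ in range(n)])

normal = make_windows(100, amplitude=1.0, noise=0.1)
abnormal = make_windows(100, amplitude=0.4, noise=0.3)

X = np.array([gait_features(w) for w in np.vstack([normal, abnormal])])
y = np.array([0] * 100 + [1] * 100)  # 0 = normal, 1 = abnormal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

On clearly separable synthetic windows like these the classifier's held-out accuracy is high; the real device's scores depend on the learned CNN features and sensor data.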
{"title":"Research on Monitoring Assistive Devices for Rehabilitation of Movement Disorders through Multi-Sensor Analysis Combined with Deep Learning","authors":"Zhenyu Xu, Zijing Wu, Linlin Wang, Ziyue Ma, Juan Deng, Hong Sha, Hong Wang","doi":"10.3390/s24134273","DOIUrl":"https://doi.org/10.3390/s24134273","journal":{"name":"Sensors"},"PeriodicalIF":3.9,"publicationDate":"2024-07-01"}
Muhammad Bisri Musthafa, Samsul Huda, Yuta Kodera, Md. Arshad Ali, Shunsuke Araki, Jedidah Mwaura, Yasuyuki Nogami
Internet of Things (IoT) devices are driving advances in innovation, efficiency, and sustainability across various industries. However, as the number of connected IoT devices grows, the risk of intrusion becomes a major concern in IoT security. Intrusion detection systems (IDSs) are a critical component of cybersecurity infrastructure, designed to detect and respond to malicious activities within a network or system. Traditional IDS methods rely on predefined signatures or rules to identify known threats, but these techniques may struggle to detect novel or sophisticated attacks. Implementing IDSs with machine learning (ML) and deep learning (DL) techniques has therefore been proposed to improve their ability to detect attacks and strengthen overall cybersecurity posture and resilience. However, ML and DL models face several issues that can degrade their performance and effectiveness, such as overfitting and the influence of unimportant features on finding meaningful patterns. To remain reliable against new and unseen threats, these models must be optimized by addressing overfitting and applying feature selection. In this paper, we propose a scheme that optimizes IoT intrusion detection by using class balancing and feature selection for preprocessing. We evaluated the scheme on the UNSW-NB15 and NSL-KDD datasets with two different ensemble models: a support vector machine (SVM) with bagging and a long short-term memory (LSTM) network with stacking. The performance results and confusion matrices show that LSTM stacking with analysis of variance (ANOVA) feature selection is the superior model for classifying network attacks.
It achieves accuracies of 96.92% and 99.77% and overfitting values of 0.33% and 0.04% on the two datasets, respectively, and its ROC curve bends sharply toward the top-left corner, with AUC values of 0.9665 and 0.9971 on the UNSW-NB15 and NSL-KDD datasets, respectively.
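The preprocessing pipeline this entry describes (class balancing, then ANOVA feature selection, then an ensemble) can be sketched with scikit-learn. This is an illustration under assumptions, not the paper's implementation: the dataset is synthetic rather than UNSW-NB15/NSL-KDD, oversampling is naive random duplication, and a decision tree plus logistic regression stand in for the paper's SVM/LSTM base learners:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for an intrusion dataset: 20 features, only 5
# informative, imbalanced classes (~90% benign / ~10% attack).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           weights=[0.9, 0.1], random_state=0)

# Class balancing via naive random oversampling of the minority class.
minority = np.where(y == 1)[0]
extra = np.random.default_rng(0).choice(minority,
                                        size=len(y) - 2 * len(minority))
X_bal = np.vstack([X, X[extra]])
y_bal = np.concatenate([y, y[extra]])

X_tr, X_te, y_tr, y_te = train_test_split(X_bal, y_bal, test_size=0.3,
                                          random_state=0, stratify=y_bal)

# ANOVA F-test keeps the 5 most discriminative features; a stacking
# ensemble then combines two base learners through a meta-learner.
model = make_pipeline(
    SelectKBest(f_classif, k=5),
    StackingClassifier(
        estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                    ("lr", LogisticRegression(max_iter=1000))],
        final_estimator=LogisticRegression(max_iter=1000)))
model.fit(X_tr, y_tr)
accuracy = model.score(X_te, y_te)
```

The same pipeline shape applies when the base learners are replaced by the paper's bagged SVM or stacked LSTM and the data by the real benchmark sets.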
{"title":"Optimizing IoT Intrusion Detection Using Balanced Class Distribution, Feature Selection, and Ensemble Machine Learning Techniques","authors":"Muhammad Bisri Musthafa, Samsul Huda, Yuta Kodera, Md. Arshad Ali, Shunsuke Araki, Jedidah Mwaura, Yasuyuki Nogami","doi":"10.3390/s24134293","DOIUrl":"https://doi.org/10.3390/s24134293","journal":{"name":"Sensors"},"PeriodicalIF":3.9,"publicationDate":"2024-07-01"}