In this letter, a low-cost yet high-fidelity rolling tactile system is proposed for distinguishing patterns on curved surfaces, comprising an improved vision-based tactile sensor (VBTS) and a novel lightweight processing framework. The proposed VBTS combines a modular ring-shaped illumination configuration with an improved sensing elastomer, is easy to fabricate without complex processing, and costs only 16.95 USD in total. To achieve real-time processing of rolling tactile images, an efficient framework inspired by event-based cameras and built on computer-graphics techniques is introduced, which integrates sparse rolling tactile images into complete high-fidelity images for the final classification. To evaluate the effectiveness of the proposed system, a classification model is trained on a dataset generated from 13 cylinders with similar textures, reaching a validation identification accuracy of 98.3%. Each cylinder sample is then tested with three rolling tactile perception trials, achieving 100% identification accuracy within 1.2 s on average and indicating the promise of the proposed perception system for real-time applications.
{"title":"A High-Fidelity, Low-Cost Visuotactile Sensor for Rolling Tactile Perception","authors":"Lintao Xie;Guitao Yu;Tianhong Tong;Yang He;Dongtai Liang","doi":"10.1109/LSENS.2024.3477913","DOIUrl":"https://doi.org/10.1109/LSENS.2024.3477913","url":null,"abstract":"In this letter, a low-cost but high-fidelity rolling tactile system is proposed for distinguishing patterns on curved surfaces, including an improved vision-based tactile sensor (VBTS) and a novel lightweight processing framework. The proposed VBTS contains a modular ring-shaped illumination configuration and an improved sensing elastomer, which is easy to fabricate without complex processing and costs only 16.95 USD in total. To achieve real-time data processing of rolling tactile images, inspired by event-based cameras, an efficient processing framework is introduced based on computer graphics, which can integrate sparse rolling tactile images into complete high-fidelity images for the final classification. To evaluate the effectiveness of the proposed system, a classification model is trained using a dataset generated by 13 cylinders with similar textures, where the identification accuracy of validation is up to 98.3%. Then, we test each cylinder sample for three rolling tactile perceptions and achieve 100% identification accuracy within 1.2 s on average, indicating a promising prospect of the proposed perception system for real-time application.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"8 11","pages":"1-4"},"PeriodicalIF":2.2,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142579206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Internet of Things (IoT) advancements have provided significant benefits to the agriculture sector in rationing water usage and monitoring vegetation growth. This article presents an efficient and scalable IoT framework for smart farming. It is based on a wireless sensor and actuator network (WSAN) that logs the farm's environmental parameters to a network control center for processing and monitoring. Furthermore, a new addressing scheme for the WSAN nodes is proposed, which underpins the scalability of the solution. To evaluate the architecture's performance, simulations are conducted to measure water consumption and time to network failure. The results confirm the efficiency and reliability of the proposed scalable network and serve as a proof of concept for the work.
{"title":"An Efficient and Scalable Internet of Things Framework for Smart Farming","authors":"Imad Jawhar;Samar Sindian;Sara Shreif;Mahmoud Ezzdine;Bilal Hammoud","doi":"10.1109/LSENS.2024.3476940","DOIUrl":"https://doi.org/10.1109/LSENS.2024.3476940","url":null,"abstract":"Internet of Things (IoT) advancements have provided significant benefits to the agriculture sector in rationing water usage and monitoring the growth of vegetation. This article presents an efficient and scalable IoT framework for smart farming. It is based on a wireless sensor actuator network (WSAN) that logs the farm's environmental parameters into a network control center for processing and monitoring. Furthermore, a new addressing scheme for the WSAN nodes is proposed, which features the scalability of the proposed solution. To test and evaluate the architecture's performance, simulations are conducted to measure water consumption and time to network failure. Results confirm the efficiency and the reliability of the proposed scalable network as a proof of concept of the proposed work.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"8 11","pages":"1-4"},"PeriodicalIF":2.2,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142450990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-07. DOI: 10.1109/LSENS.2024.3475515
Hui Zheng;Ye-Sheng Zhao;Bo Zhang;Guo-Qiang Shang
With the popularization of sensors and the development of pose estimation algorithms, skeleton-based action recognition has gradually become mainstream among human action recognition tasks. The key to this task is extracting feature representations from sensor data that accurately capture the characteristics of human actions. In this letter, we propose a separable spatial-temporal graph learning approach composed of independent spatial and temporal graph networks. In the spatial graph network, a spectral graph convolutional network is used to mine the spatial features of each moment. In the temporal graph network, a global-local attention mechanism is embedded to capture interdependences across time. Extensive experiments on the NTU-RGB+D and NTU-RGB+D 120 datasets show that our proposed method outperforms several baselines.
{"title":"A Separable Spatial–Temporal Graph Learning Approach for Skeleton-Based Action Recognition","authors":"Hui Zheng;Ye-Sheng Zhao;Bo Zhang;Guo-Qiang Shang","doi":"10.1109/LSENS.2024.3475515","DOIUrl":"https://doi.org/10.1109/LSENS.2024.3475515","url":null,"abstract":"With the popularization of sensors and the development of pose estimation algorithms, a skeleton-based action recognition task has gradually become mainstream in human action recognition tasks. The key to solving skeleton-based action recognition task is to extract feature representations that can accurately outline the characteristics of human actions from sensor data. In this letter, we propose a separable spatial-temporal graph learning approach, which is composed of independent spatial and temporal graph networks. In the spatial graph network, spectral-based graph convolutional network is selected to mine spatial features of each moment. In the temporal graph network, a global-local attention mechanism is embedded to excavate interdependence at different times. Extensive experiments are carried out on the NTU-RGB+D and NTU-RGB+D 120 datasets, and the results show that our proposed method outperforms several other baselines.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"8 11","pages":"1-4"},"PeriodicalIF":2.2,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142517898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-02. DOI: 10.1109/LSENS.2024.3473688
Luke J. Weaver;S. M. Bhagya P. Samarakoon;M. A. Viraj J. Muthugala;Mohan Rajesh Elara;Zaki S. Saldi
Liquid-carrying robots require slosh-suppression methods to improve their performance, and designing such systems requires effective slosh measurement. State-of-the-art slosh estimation methods have limitations, including handling only unidirectional motion or relying on theoretical models. This letter proposes a novel sensor array for measuring sloshing in liquid-carrying mobile robots. The proposed system offers two key contributions: first, it enables comprehensive measurement and visualization of sloshing during omnidirectional movements; second, it provides compact and seamless integration into mobile robots, enabling them to mitigate the adverse effects of sloshing. The sensor system has been developed using 14 time-of-flight range sensors. The range sensors are connected to an Arduino Mega through I$^{2}$