Machine Learning-Enabled Triboelectric Nanogenerator for Continuous Sound Monitoring and Captioning

Majid Haji Bagheri, Emma Gu, Asif Abdullah Khan, Yanguang Zhang, Gaozhi Xiao, Mohammad Nankali, Peng Peng, Pengcheng Xi, Dayan Ban

Advanced Sensor Research, vol. 4, no. 2, published 2025-01-08. DOI: 10.1002/adsr.202400156
https://advanced.onlinelibrary.wiley.com/doi/10.1002/adsr.202400156
Abstract
Advancements in live audio processing, specifically in sound classification and audio captioning technologies, have widespread applications ranging from surveillance to accessibility services. However, traditional methods encounter scalability and energy-efficiency challenges. To overcome these, triboelectric nanogenerators (TENGs) are explored for energy harvesting, particularly in live-streaming sound monitoring systems. This study introduces a sustainable methodology that integrates TENG-based sensors into live sound monitoring pipelines, enhancing energy-efficient sound classification and captioning through model selection and fine-tuning strategies. Our cost-effective TENG sensor harvests ambient sound vibrations and background noise, producing up to 1.2 µW cm⁻² of output power and successfully charging capacitors, which demonstrates its capability for sustainable energy harvesting. The system achieves 94.3% classification accuracy with the Hierarchical Token-Semantic Audio Transformer (HTS-AT) model, identified as optimal for live sound event monitoring. Additionally, continuous audio captioning with EnCLAP (Combining Neural Audio Codec and Audio-Text Joint Embedding for Automated Audio Captioning) shows rapid and precise processing suitable for live-streaming environments. The Bidirectional Encoder representation from Audio Transformers (BEATs) model also demonstrated exceptional performance, achieving an accuracy of 97.25%. These models were fine-tuned on the TENG-recorded ESC-50 dataset, ensuring the system's adaptability to diverse sound conditions. Overall, this research contributes to the development of energy-efficient sound monitoring systems with wide-ranging implications across various sectors.
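To make the described pipeline concrete, below is a minimal sketch of the live-streaming classification loop the abstract outlines: audio is captured in fixed-length chunks and passed to an audio-classification model. The paper itself fine-tunes HTS-AT and BEATs on a TENG-recorded ESC-50 dataset; those checkpoints are not assumed here. Instead, a generic Hugging Face audio-classification pipeline with a placeholder model ID stands in, and a standard microphone substitutes for the TENG front end, so this is an illustration of the monitoring loop rather than the authors' implementation.

```python
# Hedged sketch of continuous sound-event monitoring in chunks.
# Assumptions (not from the paper): the `sounddevice` microphone capture,
# the placeholder checkpoint name, and the 5 s chunk length (ESC-50 clips
# are 5 s, so it is a plausible window).
import numpy as np
import sounddevice as sd
from transformers import pipeline

SAMPLE_RATE = 16_000      # Hz expected by typical audio transformers
CHUNK_SECONDS = 5         # one ESC-50-length window per inference

# Placeholder checkpoint: swap in a model fine-tuned on TENG-recorded ESC-50.
classifier = pipeline(
    "audio-classification",
    model="MIT/ast-finetuned-audioset-10-10-0.4593",
)

def classify_live(num_chunks: int = 10) -> None:
    """Record fixed-length chunks and print the top predicted sound event."""
    for _ in range(num_chunks):
        # Blocking capture of one chunk from the default input device.
        audio = sd.rec(int(CHUNK_SECONDS * SAMPLE_RATE),
                       samplerate=SAMPLE_RATE, channels=1, dtype="float32")
        sd.wait()
        preds = classifier({"raw": np.squeeze(audio),
                            "sampling_rate": SAMPLE_RATE})
        top = preds[0]
        print(f"{top['label']}: {top['score']:.2f}")

if __name__ == "__main__":
    classify_live()
```

In the paper's setup, the same loop would consume the TENG sensor's signal instead of a microphone stream, and a captioning model such as EnCLAP would run alongside the classifier to produce text descriptions of each chunk.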