Audio-based fault diagnosis identifies machine operating conditions from acoustic signals, enabling targeted maintenance and reducing downtime in smart manufacturing and embodied intelligence. The traditional Leaky Integrate-and-Fire (LIF) function in neural networks improves fault-state classification by removing shared information while preserving category-unique features. However, its threshold, a backpropagation-optimized parameter that governs the information-removal pattern, becomes a fixed constant after training. This constant threshold enforces a uniform information-removal pattern across all audio samples despite their significant variations in time-frequency characteristics. Motivated by this limitation and inspired by the auditory system's adaptive modulation, this paper proposes a learnable Adaptive Threshold that dynamically adapts to the input audio even after training. Because the threshold adapts to each input rather than remaining fixed, more category-unique information is preserved, enhancing classification accuracy. The results demonstrate that the Adaptive Threshold outperforms the constant threshold and other state-of-the-art methods, achieving 99.75% accuracy on the IDMT Engine dataset and 98.11% on the MIMII Pump dataset. Visualization results confirm that while both the adaptive and constant thresholds successfully suppress non-unique background sounds, such as flowing water, the Adaptive Threshold better preserves unique features, such as the impact sound of a broken pump. This capability yields more accurate fault diagnosis, further validating the effectiveness of the proposed method.
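The contrast between a fixed post-training threshold and an input-dependent one can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; the LIF update (leak, integrate, fire, reset) is standard, while `adaptive_threshold`, its `base` and `alpha` parameters, and the choice of mean absolute input magnitude as the adaptation statistic are hypothetical stand-ins for the learned adaptation mechanism.

```python
import numpy as np

def lif_step(v, x, threshold, leak=0.9):
    """One Leaky Integrate-and-Fire step: leak the membrane potential,
    integrate the input, fire where the threshold is crossed, hard-reset."""
    v = leak * v + x
    spike = (v >= threshold).astype(v.dtype)
    v = v * (1.0 - spike)  # reset the potential where a spike fired
    return v, spike

def adaptive_threshold(x, base=1.0, alpha=0.5):
    """Hypothetical input-dependent threshold: shifts with each sample's
    mean absolute magnitude instead of staying a single constant."""
    return base + alpha * np.mean(np.abs(x), axis=-1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(2, 8))  # two audio samples, 8 features each
v = np.zeros_like(x)

# Constant threshold: the same removal pattern for every sample.
_, s_const = lif_step(v, x, threshold=1.0)

# Adaptive threshold: each sample fires against its own threshold.
_, s_adapt = lif_step(v, x, adaptive_threshold(x))
```

With a constant threshold, a quiet sample and a loud sample are gated by the same cutoff; the adaptive variant scales the cutoff per sample, which is the behavior the abstract argues preserves more category-unique detail.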
