Neural additive models (NAMs) have recently attracted increasing attention due to their promising interpretability and approximation ability. However, existing work on NAMs is typically limited to the mean squared error (MSE) criterion, which can suffer degraded performance on data containing non-Gaussian noise, such as outliers and heavy-tailed noise. To address this issue, we utilize maximum likelihood estimation for error modeling and formulate noise-distribution-aware additive models, termed Maximum Likelihood Neural Additive Models (ML-NAM). ML-NAM employs kernel density estimation to avoid explicit assumptions about the noise distribution, allowing it to adapt flexibly to diverse noise environments. Theoretically, excess risk bounds are established for ML-NAM under mild conditions, and the resulting minimax convergence rate exhibits polynomial decay when the target function lies in a Besov space. Empirically, extensive experiments validate the effectiveness and robustness of the proposed ML-NAM in comparison with several state-of-the-art approaches. Across multiple datasets, ML-NAM reduces MSE by - compared with NAM under non-Gaussian noise. This work supports reliable decision-making in high-stakes domains where robustness and interpretability are essential.
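To make the core idea concrete, the following is a minimal sketch of an additive model trained by maximizing a kernel-density-estimated likelihood of its residuals, rather than minimizing MSE. This is an illustrative reconstruction, not the paper's implementation: the class and function names (FeatureNet, NAM, kde_nll), the fixed Gaussian-kernel bandwidth, and the use of the in-batch residuals themselves to build the KDE (without leave-one-out correction) are all assumptions made for brevity.

```python
import math
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Per-feature subnetwork of a neural additive model (hypothetical architecture)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

class NAM(nn.Module):
    """Additive model: f(x) = bias + sum_j f_j(x_j)."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.feature_nets = nn.ModuleList(FeatureNet(hidden) for _ in range(n_features))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # Each subnetwork sees only its own input column.
        contribs = [net(x[:, j:j + 1]) for j, net in enumerate(self.feature_nets)]
        return self.bias + torch.stack(contribs, dim=0).sum(dim=0).squeeze(-1)

def kde_nll(residuals, bandwidth=0.5):
    """Negative log-likelihood of the residuals under a Gaussian-kernel KDE
    built from those same residuals (assumed form; bandwidth is a free choice)."""
    r = residuals.view(-1, 1)                 # (n, 1)
    diffs = (r - r.t()) / bandwidth           # pairwise residual gaps, (n, n)
    log_kernel = -0.5 * diffs.pow(2)          # log of unnormalized Gaussian kernel
    log_density = torch.logsumexp(log_kernel, dim=1) \
        - math.log(r.shape[0] * bandwidth * math.sqrt(2 * math.pi))
    return -log_density.mean()

def train_step(model, optimizer, x, y, bandwidth=0.5):
    """One gradient step that maximizes the KDE-based residual likelihood."""
    optimizer.zero_grad()
    loss = kde_nll(y - model(x), bandwidth)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this loss, residuals that cluster in high-density regions of the estimated noise distribution are favored, so a single outlier contributes far less to the gradient than it would under MSE, which is the robustness mechanism the abstract describes.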
