Near-infrared (NIR) spectroscopy is a key analytical tool across industries, providing fast, non-destructive measurements. However, traditional centralized modeling faces challenges in data privacy, instrument heterogeneity, and limited inter-institutional collaboration. We present a decentralized federated learning (DFL) system for NIR spectroscopy that enables institutions to collaboratively train accurate models without sharing raw data. The proposed system combines standardized spectral preprocessing with lightweight communication protocols to achieve modeling efficiency and data confidentiality. Extensive experiments were conducted on augmented Corn and Gasoline datasets using PLSR, SVR, and 1D-CNN models. In our simulations, we modeled a network of 30 clients communicating via a ring topology and applied FedProx regularization (μ = 0.1). The proposed DFL system produces predictions within 5–8% of centralized results, while its architecture inherently offers improved scalability, fault tolerance, and privacy protection. The combination of FedProx and model personalization preserves training stability under non-IID data conditions, recovering 20% of the lost accuracy. In cross-instrument scenarios, the DFL approach outperforms both local-only and standard centralized FL models, reducing prediction errors by up to 52% and showing strong generalization to new devices. While DFL requires more training rounds, system efficiency analysis shows that its total communication cost is 25% lower than that of centralized FL. Our results indicate that DFL is a promising and practical approach for NIR spectroscopy, offering privacy, scalability, and generalizability for real-world, multi-party deployments with heterogeneous devices. However, performance can decline under extreme data heterogeneity, highlighting the need for further enhancements in model personalization.
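To make the training setup described above concrete, the following is a minimal, illustrative sketch of decentralized federated learning over a ring topology with a FedProx proximal term (μ = 0.1). It uses a toy linear model and synthetic data in place of real NIR spectra and the PLSR/SVR/1D-CNN models; apart from μ and the 30-client count, all hyperparameters and the learning procedure are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-client NIR data: 30 clients, each holding its own
# (X, y) regression set. Dimensions, learning rate, and round counts are
# illustrative assumptions.
N_CLIENTS, N_SAMPLES, N_FEATURES = 30, 40, 50
MU = 0.1            # FedProx proximal coefficient, as stated in the abstract
LR = 0.01           # local learning rate (assumed)
ROUNDS, LOCAL_STEPS = 50, 5

w_true = rng.normal(size=N_FEATURES)
clients = []
for _ in range(N_CLIENTS):
    X = rng.normal(size=(N_SAMPLES, N_FEATURES))
    y = X @ w_true + 0.1 * rng.normal(size=N_SAMPLES)
    clients.append((X, y))

# Decentralized setting: every client keeps its own model; there is no server.
models = [np.zeros(N_FEATURES) for _ in range(N_CLIENTS)]


def local_fedprox_update(w, w_ref, X, y):
    """A few gradient steps on the local least-squares loss plus the FedProx
    proximal term (mu/2) * ||w - w_ref||^2, which anchors the update to the
    model obtained from the ring neighbours."""
    for _ in range(LOCAL_STEPS):
        grad = X.T @ (X @ w - y) / len(y) + MU * (w - w_ref)
        w = w - LR * grad
    return w


for _ in range(ROUNDS):
    # Ring gossip step: each client averages its model with its two neighbours
    # (indices wrap around the ring).
    averaged = [
        (models[i - 1] + models[i] + models[(i + 1) % N_CLIENTS]) / 3.0
        for i in range(N_CLIENTS)
    ]
    # Local FedProx training starts from, and is regularized toward, the average.
    models = [
        local_fedprox_update(averaged[i].copy(), averaged[i], *clients[i])
        for i in range(N_CLIENTS)
    ]

mse = np.mean([np.mean((X @ w - y) ** 2) for w, (X, y) in zip(models, clients)])
print(f"mean local MSE after decentralized training: {mse:.4f}")
```

In this sketch, each round exchanges only model vectors with two ring neighbours, which is the kind of lightweight, serverless communication pattern the abstract contrasts with centralized FL.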