Privacy-preserving data aggregation is well suited to federated learning, enabling an aggregator to learn a specified fusion statistic over private data held by clients. Robustness is another critical requirement in federated learning, since a malicious client can readily launch poisoning attacks by submitting artificial, malformed model updates to the central server. To this end, we present a robust privacy-preserving data aggregation protocol based on a distributed trust model, which achieves privacy protection via three-party computation based on replicated secret sharing with an honest majority. The protocol also achieves robustness by securely computing an input-validation strategy called norm bounding, including ℓ2-norm and ℓ∞-norm bounding, which has been proven effective in defending against poisoning attacks. Following best practice in hybrid protocol design, we exploit both Boolean sharing and arithmetic sharing to efficiently enforce ℓ∞-norm and ℓ2-norm bounding, respectively. Additionally, we propose a novel share conversion protocol that converts Boolean shares into arithmetic ones; it is of independent interest and could be used in other protocols. We provide a security analysis of the protocol based on the standard simulation paradigm and the modular composition theorem, concluding that the presented protocol realizes the secure aggregation functionality with norm bounding, with computational security, in the presence of one static semi-honest server. Comprehensive efficiency analysis and empirical experiments demonstrate its superiority over related protocols.
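To make the secret-sharing layer concrete, the following is a minimal plaintext sketch of 2-out-of-3 replicated secret sharing and its local additive aggregation, the core primitive the protocol builds on. The ring Z_{2^64} and all function names are illustrative assumptions, not the paper's actual implementation, and the norm-bounding check and share conversion are omitted.

```python
import random

MOD = 1 << 64  # shares live in the ring Z_{2^64}; this ring choice is an assumption


def share(x):
    """Split x into three additive shares with x = x1 + x2 + x3 (mod 2^64).
    In replicated secret sharing, party i holds the pair (s_i, s_{i+1})."""
    x1 = random.randrange(MOD)
    x2 = random.randrange(MOD)
    x3 = (x - x1 - x2) % MOD
    s = [x1, x2, x3]
    # Replicated layout: party i gets (s[i], s[(i+1) % 3]).
    return [(s[i], s[(i + 1) % 3]) for i in range(3)]


def add(a, b):
    """Addition of two shared values is purely local:
    each party adds its two share components, no communication needed."""
    return [((a[i][0] + b[i][0]) % MOD, (a[i][1] + b[i][1]) % MOD)
            for i in range(3)]


def reconstruct(p0, p1):
    """Any two parties (here parties 0 and 1) together hold all three
    additive shares, so an honest majority can open the secret."""
    x1, x2 = p0
    _, x3 = p1
    return (x1 + x2 + x3) % MOD


# Aggregation example: two clients secret-share their (already
# norm-checked) updates, and the servers sum them without seeing either.
u1, u2 = share(42), share(100)
summed = add(u1, u2)
print(reconstruct(summed[0], summed[1]))  # prints 142
```

The replicated layout is what gives the honest-majority guarantee: any single semi-honest server sees only two of the three uniformly random additive shares, which reveal nothing about the underlying value.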