Federated learning (FL) enables collaborative model training without sharing private data, thereby potentially meeting the growing demand for data privacy protection. Despite its potential, FL still faces challenges in achieving both privacy preservation and Byzantine robustness when handling sensitive data. To address these challenges, we present SAMFL, a novel Secure Aggregation Mechanism for Federated Learning with Byzantine-Robustness by Functional Encryption. Our approach introduces a novel dual-decryption multi-input functional encryption (DD-MIFE) scheme, which enables efficient computation of cosine similarities and aggregation of encrypted gradients from a single ciphertext. The scheme supports dual decryption, producing distinct results under different keys, while maintaining high efficiency. We further propose TF-Init, which integrates DD-MIFE with truth discovery (TD) to eliminate the reliance on a root dataset. Additionally, we devise a secure cosine similarity calculation aggregation protocol (SC2AP) based on DD-MIFE, ensuring privacy-preserving and Byzantine-robust secure aggregation for FL. To improve efficiency, we employ single instruction multiple data (SIMD) parallelism to accelerate encryption and decryption. To preserve accuracy, we incorporate differential privacy (DP) with selective clipping of model layers into the FL framework. Finally, we integrate TF-Init, SC2AP, SIMD, and DP to construct SAMFL. Extensive experiments demonstrate that SAMFL defends against both inference attacks and poisoning attacks while improving efficiency and accuracy over existing methods. SAMFL thus provides a comprehensive, integrated solution for FL with efficiency, accuracy, privacy preservation, and robustness.
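To make the robustness criterion concrete, the following is a minimal plaintext sketch of cosine-similarity-based Byzantine filtering. This is only an illustration of the underlying idea: the paper performs these computations over encrypted gradients via DD-MIFE, and the reference gradient, threshold `tau`, and function names below are assumptions for the sketch, not the paper's protocol.

```python
import math

def cosine(u, v):
    """Cosine similarity between two gradient vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def robust_aggregate(gradients, reference, tau=0.0):
    """Average only the client gradients whose cosine similarity to the
    reference exceeds tau; gradients pointing away from the reference
    (e.g., from poisoning clients) are dropped."""
    kept = [g for g in gradients if cosine(g, reference) > tau]
    if not kept:
        return list(reference)  # fall back to the reference if all are filtered
    dim = len(reference)
    return [sum(g[i] for g in kept) / len(kept) for i in range(dim)]
```

For example, with two honest clients near `[1, 1]` and one Byzantine client submitting `[-1, -1]`, the Byzantine update has similarity -1 to the reference and is excluded from the average; this mirrors, in the clear, the filtering that SC2AP carries out on ciphertexts.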