The rise of security concerns in conventional centralized learning has driven the adoption of federated learning. However, the risks posed by poisoning attacks from internal adversaries against federated systems necessitate robust anti-poisoning frameworks. While previous defensive mechanisms relied on outlier detection, recent approaches focus on latent space representation. In this paper, we investigate a novel robust aggregation method for federated learning, namely Fed-LSAE, which leverages latent space representation via the penultimate layer and an Autoencoder to exclude malicious clients from the training process. Specifically, in every training round, Fed-LSAE measures the similarity between each local latent space vector and the global one using the Centered Kernel Alignment (CKA) algorithm. Based on these similarity scores, clients are clustered into benign and attack groups, and only updates from the benign cluster are aggregated at the central server via federated averaging. In other words, adversaries are detected and eliminated from the federated training procedure. Experimental results on the CIC-ToN-IoT and N-BaIoT datasets confirm the feasibility of our defensive mechanism against cutting-edge poisoning attacks for developing a robust federated threat detector in the Internet of Things (IoT) context. When integrated with our Fed-LSAE defense, the federated detector achieves approximately 98% across all evaluation metrics.
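The core similarity step described above can be sketched with linear Centered Kernel Alignment (CKA). The following is a minimal illustration, not the paper's implementation: the function name, the synthetic latent matrices, and the noise levels are assumptions made for the example.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two representation matrices of shape (n, d)."""
    # Center each feature column so the score is invariant to mean shifts.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return cross / (norm_x * norm_y)

# Hypothetical latent vectors: 64 samples, 16-dimensional penultimate-layer outputs.
rng = np.random.default_rng(0)
global_latent = rng.normal(size=(64, 16))
benign_latent = global_latent + 0.05 * rng.normal(size=(64, 16))  # closely aligned
poisoned_latent = rng.normal(size=(64, 16))  # unrelated representations

print(linear_cka(global_latent, benign_latent))    # high similarity
print(linear_cka(global_latent, poisoned_latent))  # much lower similarity
```

A benign client's latent space tracks the global one, yielding a CKA score near 1, while a poisoned client's diverges; clustering these scores separates the two groups before aggregation.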