Machine learning (ML) techniques have demonstrated considerable effectiveness when integrated into routing protocols to enhance the performance of Smart Grid Networks. However, their behavior across diverse real-world scenarios remains underexplored. In this study, we evaluate the performance and transferability of four ML models, namely Long Short-Term Memory (LSTM), Feedforward Neural Network (FF), Decision Trees, and Naive Bayes, across three distinct scenarios: Barcelona, Montreal, and Rome. Through rigorous experimentation, we assess the varying efficacy of these models in each scenario. Our results show that LSTM outperforms the other models in the Montreal and Rome scenarios, highlighting its effectiveness in predicting the optimal forwarding node for packet transmission. In contrast, an Ensemble of Bagged Decision Trees emerges as the best-performing model for the Barcelona scenario, proving strongest at selecting the most suitable forwarding node. However, the transferability of these models to scenarios on which they were not trained is notably limited, as evidenced by their degraded performance on datasets from the other scenarios. This observation underscores the importance of considering data characteristics when selecting ML models for real-world applications. Furthermore, we find that the distribution of nodes within a dataset plays a critical role in determining model efficacy. These insights contribute to a deeper understanding of the challenges inherent in transferring ML models between real-world scenarios and provide valuable guidance for practitioners and researchers in optimizing ML applications in Smart Grid Networks.