Situation awareness is critical for successful decision-making in safety-critical and mission-critical environments such as air traffic and electric power control rooms. Situation awareness models provide high explainability; however, decision support systems based on these models require expert intervention for initial configuration and for ongoing maintenance as conditions evolve, which is generally costly. Reinforcement learning is a machine learning strategy in which software agents learn to act in an environment so as to maximize a cumulative reward, improving performance through experience. We investigated how reinforcement learning can help experts configure and maintain situation awareness models. This work proposes the Reinforcement Learning Situation Awareness (RLSA) method, which uses reinforcement learning to automate both the initial set-up and the evolving adjustment of the belief parameters of the cognitive situation awareness models employed by decision support systems. Tests applying the method to a simulated case study and to public datasets, under distinct evolving and non-evolving conditions and evaluated with accuracy and other metrics, show promising results compared with those reported in the literature, including baseline Naïve Bayes and Decision Tree algorithms. By effectively automating these parameter adjustments, RLSA reduces the demand for specialized expert work in applications with evolving behavior while preserving the explainable cognitive characteristics of situation awareness models.
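As a rough illustration of the general idea only (not the paper's RLSA algorithm), the sketch below shows an epsilon-greedy bandit agent choosing among candidate values for a single belief parameter and updating its value estimates from a reward signal; the environment, reward function, candidate values, and constants are assumptions made for this example.

```python
import random

# Hypothetical sketch: an epsilon-greedy agent tunes one belief parameter of a
# toy situation-assessment rule. All names and values here are assumptions,
# not the RLSA method itself.

CANDIDATE_BELIEFS = [0.1 * i for i in range(1, 10)]  # candidate values 0.1 .. 0.9
TRUE_BELIEF = 0.7      # hidden "correct" parameter the agent should approach
EPSILON = 0.1          # exploration rate
LEARNING_RATE = 0.2
EPISODES = 2000


def reward(belief: float) -> float:
    """Higher reward the closer the chosen belief is to the (unknown) true value,
    mimicking feedback such as whether the assessed situation matched reality."""
    noise = random.gauss(0.0, 0.05)
    return 1.0 - abs(belief - TRUE_BELIEF) + noise


def run() -> float:
    # One action-value estimate per candidate belief (a simple multi-armed bandit).
    q_values = {b: 0.0 for b in CANDIDATE_BELIEFS}
    for _ in range(EPISODES):
        if random.random() < EPSILON:
            belief = random.choice(CANDIDATE_BELIEFS)    # explore
        else:
            belief = max(q_values, key=q_values.get)     # exploit current best
        r = reward(belief)
        # Incremental update of the action-value estimate toward the observed reward.
        q_values[belief] += LEARNING_RATE * (r - q_values[belief])
    return max(q_values, key=q_values.get)


if __name__ == "__main__":
    random.seed(0)
    print(f"Selected belief parameter: {run():.1f}")  # expected to be near 0.7
```

In a setting like the one the abstract describes, such a reward signal could in principle be derived from comparing the decision support system's assessed situation against observed outcomes, letting the belief parameters track evolving behavior without manual retuning.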