Background: Symptoms of anxiety and depression are common in children, yet barriers prevent the implementation of evidence-based interventions in national mental health services. Moreover, community settings may involve factors that limit effectiveness compared with research settings.
Objectives: This study is the first to evaluate the medium- and long-term effectiveness of the Super Skills for Life (SSL) transdiagnostic program integrated into routine clinical practice in Spain, thereby addressing the critical gap between controlled trials and real-world implementation.
Method: Forty-three children (mean age = 10.21 years; 65.1% boys) with an emotional disorder participated.
Results: The proportion of children meeting criteria for an emotional disorder decreased significantly at all post-intervention follow-ups (3, 6, and 12 months). Significantly lower scores were also found for symptoms of depression, anxiety, and global difficulties; reductions in anxiety symptoms and global difficulties remained statistically significant at the 3-, 6-, and 12-month follow-ups.
Conclusions: The SSL program, when implemented in routine mental health services, reduces children's symptoms and diagnoses and demonstrates the feasibility of applying a transdiagnostic, group-based intervention in real-world clinical settings. Beyond clinical improvements, the design (anchored in multi-informant evidence, fidelity monitoring, and longitudinal follow-up) provides a replicable model for evaluating child mental health interventions under real-world conditions, strengthening both scientific validity and clinical utility.
This surgical rehabilitation program in a community setting provides a case study for the implementation of an effective monitoring and evaluation (M&E) plan. The partnership between St. Catherine University evaluators and staff at Kafika House Tanzania offers insights into how to strengthen evaluation and planning for a rehabilitation program and provides a case study for working in intercultural partnerships. This paper assesses the responsiveness, quality, and fidelity of M&E implementation and elucidates lessons learned from evaluation implementation, including key adaptations made while piloting data collection in a resource-limited setting. Feedback interviews with Kafika House staff demonstrated responsiveness to the M&E tools, and pilot data collected with tools developed for program and impact evaluation were used to measure implementation quality and fidelity. Data collector buy-in and a responsive, communicative partnership support motivation for consistent data collection. Kafika House learned that program evaluation was imperative to data quality and fidelity, while the academic team learned how best to support the evaluation plan. Utilizing an ecological model strengthened our consideration of the individual, community, and innovation-level factors of implementation that promoted the sustainability of the M&E program. Evaluation and planning adaptations in a resource-limited setting addressed training, funding, sustainable integration, and administration of measurement tools.

