Background: Military veterans may be at increased risk of posttraumatic stress disorder (PTSD) compared with the general population. PTSD is often comorbid with harmful and problematic patterns of gambling. Behavioral therapies such as acceptance and commitment therapy have shown promise in treating these co-occurring disorders, especially when combined with mobile health (mHealth) interventions to circumvent known help-seeking barriers faced by veterans. However, to date, recruitment for mHealth interventions has been challenging, which may limit intervention feasibility.
Objective: In this paper, our objective was to describe the strategies used to recruit UK military veterans with PTSD or experience of harmful gambling to a pilot study of a smartphone-based digital intervention, ACT Vet.
Methods: We used several recruitment strategies, such as direct mailing, paid study advertising on social media (Facebook) and an online research platform (Prolific), study-specific website management, in-person event hosting with veterans' charities, snowball sampling, and incentives for completion.
Results: Over 27 days, recruitment through Facebook accounted for 21 eligible veterans (n=7, 33% through unpaid advertising and n=14, 67% through paid advertising), whereas Prolific accounted for 50 eligible veterans. Additional strategies recruited 8 eligible veterans. In total, 79 eligible military veterans were recruited for ACT Vet, of whom 24 (30%) completed the final steps of the study.
Conclusions: Difficulties such as low advertisement conversion rates and participant and data attrition arose throughout this study. Our findings illustrate the relative effectiveness of social media- and online platform-based initiatives in recruiting veterans with PTSD or harmful gambling. Future research should consider establishing an online presence with diverse branding to support effective digital intervention recruitment and attract representative samples of veterans for mHealth research.
Background: The expansion of mobile health (mHealth) apps has created a growing need for structured and predictive tools to evaluate app quality before deployment. The Mobile App Rating Scale (MARS) offers a standardized, expert-driven assessment across 4 key dimensions (engagement, functionality, aesthetics, and information), but its use in forecasting user satisfaction through predictive modeling remains limited.
Objective: This study aimed to investigate how k-means clustering, combined with machine learning models, can predict user ratings for physical activity apps based on MARS dimensions, with the goal of forecasting ratings before production and uncovering insights into user satisfaction drivers.
Methods: We analyzed a dataset of 155 MARS-rated physical activity apps with user ratings. The dataset was split into training (n=111) and testing (n=44) subsets. K-means clustering was applied to the training data, identifying 2 clusters. Exploratory data analysis included box plots, summary statistics, and component+residual plots to visualize linearity and distribution patterns across MARS dimensions. Correlation analysis was performed to quantify relationships between each MARS dimension and user ratings. In total, 5 machine learning models (generalized additive models, k-nearest neighbors, random forest, extreme gradient boosting, and support vector regression) were trained with and without clustering. Models were hyperparameter tuned and trained separately on each cluster, and the best-performing model for each cluster was selected. These cluster-level predictions were combined to compute final performance metrics for the test set. Performance was evaluated using the correct prediction percentage (predictions within 0.5 of the actual rating), mean absolute error, and R². Validation was performed on 2 additional datasets: mindfulness (n=85) and older adults (n=55) apps.
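To illustrate the cluster-then-predict pipeline summarized above, the following Python sketch (scikit-learn) shows one possible implementation. The file name, column names, and hyperparameters are illustrative assumptions rather than the study's actual code, and SVR plus k-nearest neighbors stand in for the best-performing models selected per cluster after tuning.

# Minimal sketch of the cluster-then-predict pipeline; names and settings are assumptions.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error, r2_score

FEATURES = ["engagement", "functionality", "aesthetics", "information"]  # MARS dimensions

df = pd.read_csv("mars_physical_activity_apps.csv")  # hypothetical file name
train, test = train_test_split(df, test_size=44, random_state=42)  # 111 train / 44 test

# Step 1: k-means clustering (k=2) on the MARS dimensions of the training apps;
# labels 0 and 1 correspond to the two clusters described in the abstract.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=42)
train = train.assign(cluster=kmeans.fit_predict(train[FEATURES]))
test = test.assign(cluster=kmeans.predict(test[FEATURES]))

# Step 2: fit one model per cluster (stand-ins for the tuned, best-performing models).
models = {0: SVR(C=1.0), 1: KNeighborsRegressor(n_neighbors=5)}
for c, model in models.items():
    subset = train[train["cluster"] == c]
    model.fit(subset[FEATURES], subset["user_rating"])

# Step 3: route each test app to its cluster's model and combine the predictions.
preds = np.empty(len(test))
for c, model in models.items():
    mask = (test["cluster"] == c).to_numpy()
    if mask.any():
        preds[mask] = model.predict(test.loc[mask, FEATURES])

# Step 4: evaluate the combined predictions on the test set.
y_true = test["user_rating"].to_numpy()
within_half = np.mean(np.abs(preds - y_true) <= 0.5) * 100  # correct prediction % (0.5 range)
print(f"Within 0.5: {within_half:.2f}%")
print(f"MAE: {mean_absolute_error(y_true, preds):.2f}")
print(f"R^2: {r2_score(y_true, preds):.2f}")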
Results: Exploratory data analysis revealed that apps in cluster 1 were feature-rich and scored higher across all MARS dimensions, reflecting comprehensive, engagement-oriented designs. In contrast, cluster 2 comprised simpler, utilitarian apps focused on basic functionality. Component+residual plots showed nonlinear relationships, which became more interpretable within clusters. Correlation analysis indicated stronger associations of user ratings with engagement and functionality, but weaker or negative correlations with aesthetics and information, particularly in cluster 2. In the unclustered dataset, k-nearest neighbors achieved 79.55% accuracy, mean absolute error=0.26, and R²=0.06. The combined support vector regression (cluster 1)+k-nearest neighbors (cluster 2) model achieved the highest performance: 88.64% accuracy, mean absolute error=0.27, and R²=0.04. Clustering improved prediction accuracy and enhanced alignment between predicted and actual user ratings. The models also generalized well to the external datasets.
Conclusions: Clustering MARS-rated apps before predictive modeling improved the accuracy of user rating forecasts, suggesting that combining k-means clustering with machine learning can support pre-deployment evaluation of physical activity apps and help identify the MARS dimensions most closely associated with user satisfaction.

