Evaluating the generalisability of region-naïve machine learning algorithms for the identification of epilepsy in low-resource settings

Ioana Duta, Symon M Kariuki, Anthony K Ngugi, Angelina Kakooza Mwesige, Honorati Masanja, Daniel M Mwanga, Seth Owusu-Agyei, Ryan Wagner, J Helen Cross, Josemir W Sander, Charles R Newton, Arjune Sen, Gabriel Davis Jones

PLOS Digital Health, 4(2):e0000491. Published 2025-02-12 (eCollection 2025/2/1). DOI: 10.1371/journal.pdig.0000491

Citations: 0
Abstract
Objectives: Approximately 80% of people with epilepsy live in low- and middle-income countries (LMICs), where limited resources and stigma hinder accurate diagnosis and treatment. Clinical machine learning models have demonstrated substantial promise in supporting the diagnostic process in LMICs by aiding in preliminary screening and detection of possible epilepsy cases without relying on specialised or trained personnel. How well these models generalise to naïve regions is, however, underexplored. Here, we use a novel approach to assess the suitability and applicability of such clinical tools to aid screening and diagnosis of active convulsive epilepsy in settings beyond their original training contexts.
Methods: We sourced data from the Study of Epidemiology of Epilepsy in Demographic Sites dataset, which includes demographic information and clinical variables related to diagnosing epilepsy across five sub-Saharan African sites. For each site, we developed a region-specific (single-site) predictive model for epilepsy and assessed its performance at the other sites. We then iteratively added sites to a multi-site model and evaluated model performance on the omitted regions. Model performance and parameters were then compared across every permutation of sites. We used a leave-one-site-out cross-validation analysis to assess the impact of incorporating individual site data in the model.
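The leave-one-site-out protocol described above can be sketched in a few lines. Everything below is illustrative: the site names, synthetic features, and logistic-regression model are assumptions for demonstration and do not reproduce the study's actual variables or models. The per-site distribution shift mimics the regional variation the paper investigates.

```python
# Illustrative leave-one-site-out cross-validation sketch.
# Site names, features, and model choice are assumptions, not the study's.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
SITES = ["SiteA", "SiteB", "SiteC", "SiteD", "SiteE"]  # five hypothetical sites

def make_site_data(shift, n=200):
    """Synthetic per-site data; the mean shift mimics regional variation."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    logits = X @ np.array([1.0, -0.5, 0.8, 0.0, 0.3]) + shift
    y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)
    return X, y

data = {site: make_site_data(shift=i * 0.3) for i, site in enumerate(SITES)}

# Leave one site out: train on the remaining sites, evaluate externally.
results = {}
for held_out in SITES:
    X_train = np.vstack([data[s][0] for s in SITES if s != held_out])
    y_train = np.concatenate([data[s][1] for s in SITES if s != held_out])
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    X_test, y_test = data[held_out]
    results[held_out] = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"held-out {held_out}: external AUC = {results[held_out]:.3f}")
```

Comparing each held-out site's external AUC against the same site's internal (within-site) performance is what exposes the generalisability gap the abstract reports.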
Results: Single-site clinical models performed well within their own regions, but generally worse when evaluated in other regions (p<0.05). Model weights and optimal thresholds varied markedly across sites. When the models were trained using data from an increasing number of sites, mean internal performance decreased while external performance improved.
Conclusions: Clinical models for epilepsy diagnosis in LMICs demonstrate characteristic traits of ML models, such as limited generalisability and a trade-off between internal and external performance. The relationship between predictors and model outcomes also varies across sites, suggesting the need to update specific model aspects with local data before broader implementation. Variations are likely to be particular to the cultural context of diagnosis. We recommend developing models adapted to the cultures and contexts of their intended deployment and caution against deploying region- and culture-naïve models without thorough prior evaluation.