Machine Learning Modeling of Disease Treatment Default: A Comparative Analysis of Classification Models
Michael Owusu-Adjei, J. B. Hayfron-Acquah, F. Twum, Gaddafi Abdul-Salaam
Advances in Public Health, published 2023-01-03. DOI: 10.1155/2023/4168770
Citations: 0
Abstract
Generally, patient default on disease treatment is regarded as the biggest threat to favourable treatment outcomes. It is seen as a reason for the resurgence of infectious diseases, including tuberculosis, in some developing countries, and its occurrence in chronic disease management is associated with high morbidity and mortality rates. Many reasons have been adduced for this phenomenon, yet exploring treatment default with biographic and behavioural metrics collected from patients and healthcare providers remains a challenge. This study focuses on contextual, nonbiomedical measurements, using supervised machine learning to understand why treatment default occurs and to identify the important contextual parameters that contribute to it. The predictive accuracy scores of four supervised machine learning algorithms, namely gradient boosting, logistic regression, random forest, and support vector machine, were 0.87, 0.90, 0.81, and 0.77, respectively. The positive predictive values of the four models ranged from 98.72% to 98.87%, while the negative predictive values of gradient boosting, logistic regression, random forest, and support vector machine were 50%, 75%, 22.22%, and 50%, respectively. Logistic regression achieved the highest negative predictive value (75%, with the smallest error margin of 25%) together with the highest accuracy score of 0.90, whereas random forest had the lowest negative predictive value (22.22%), registering the highest error margin of 77.78%.
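The metrics reported above (accuracy, positive predictive value, and negative predictive value) are all derived from a model's confusion matrix. A minimal sketch of the definitions, using hypothetical confusion-matrix counts rather than the study's actual data:

```python
def accuracy(tp, tn, fp, fn):
    # fraction of all predictions that are correct
    return (tp + tn) / (tp + tn + fp + fn)

def ppv(tp, fp):
    # positive predictive value: fraction of positive predictions that are correct
    return tp / (tp + fp)

def npv(tn, fn):
    # negative predictive value: fraction of negative predictions that are correct
    return tn / (tn + fn)

# Hypothetical counts (NOT the study's data), chosen so NPV = 75%
tp, tn, fp, fn = 87, 3, 1, 1
print(round(accuracy(tp, tn, fp, fn), 4))  # 0.9783
print(round(ppv(tp, fp), 4))               # 0.9886
print(round(npv(tn, fn), 2))               # 0.75
```

With a heavily imbalanced class distribution, as the gap between the reported PPV (~99%) and NPV (22%-75%) suggests here, NPV can be low even when overall accuracy is high, which is why the abstract treats it as a separate discriminator between models.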
Using a chi-square test of variable independence, this study suggests that age, the presence of comorbidities, concern over long queuing/waiting times at treatment facilities, the availability of qualified clinicians, and the patient's nutritional state (whether or not on a controlled diet) are likely to affect adherence to disease treatment and could increase the risk of default.
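The chi-square test of independence mentioned above compares observed contingency-table counts against the counts expected if two variables were independent. A minimal sketch with a hypothetical 2x2 table (illustrative counts only, not the study's data):

```python
def chi_square_independence(table):
    """Pearson chi-square statistic for a 2D contingency table of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical table: rows = comorbidity present/absent, cols = defaulted yes/no
table = [[20, 30],
         [30, 20]]
stat = chi_square_independence(table)
print(stat)  # 4.0; exceeds the 3.841 critical value (df=1, alpha=0.05),
             # so independence would be rejected for these counts
```

For larger tables the degrees of freedom are (rows - 1) * (cols - 1), and the statistic is compared against the corresponding chi-square critical value to decide whether a variable such as age is associated with default.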