Derya Şahin, Lana Kambeitz-Ilankovic, Stephen Wood, Dominic Dwyer, Rachel Upthegrove, Raimo Salokangas, Stefan Borgwardt, Paolo Brambilla, Eva Meisenzahl, Stephan Ruhrmann, Frauke Schultze-Lutter, Rebekka Lencer, Alessandro Bertolino, Christos Pantelis, Nikolaos Koutsouleris, Joseph Kambeitz
{"title":"精确精神病学中的算法公平性:精神病临床高危个体的预测模型分析。","authors":"Derya Şahin, Lana Kambeitz-Ilankovic, Stephen Wood, Dominic Dwyer, Rachel Upthegrove, Raimo Salokangas, Stefan Borgwardt, Paolo Brambilla, Eva Meisenzahl, Stephan Ruhrmann, Frauke Schultze-Lutter, Rebekka Lencer, Alessandro Bertolino, Christos Pantelis, Nikolaos Koutsouleris, Joseph Kambeitz","doi":"10.1192/bjp.2023.141","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Computational models offer promising potential for personalised treatment of psychiatric diseases. For their clinical deployment, fairness must be evaluated alongside accuracy. Fairness requires predictive models to not unfairly disadvantage specific demographic groups. Failure to assess model fairness prior to use risks perpetuating healthcare inequalities. Despite its importance, empirical investigation of fairness in predictive models for psychiatry remains scarce.</p><p><strong>Aims: </strong>To evaluate fairness in prediction models for development of psychosis and functional outcome.</p><p><strong>Method: </strong>Using data from the PRONIA study, we examined fairness in 13 published models for prediction of transition to psychosis (<i>n</i> = 11) and functional outcome (<i>n</i> = 2) in people at clinical high risk for psychosis or with recent-onset depression. Using accuracy equality, predictive parity, false-positive error rate balance and false-negative error rate balance, we evaluated relevant fairness aspects for the demographic attributes 'gender' and 'educational attainment' and compared them with the fairness of clinicians' judgements.</p><p><strong>Results: </strong>Our findings indicate systematic bias towards assigning less favourable outcomes to individuals with lower educational attainment in both prediction models and clinicians' judgements, resulting in higher false-positive rates in 7 of 11 models for transition to psychosis. 
Interestingly, the bias patterns observed in algorithmic predictions were not significantly more pronounced than those in clinicians' predictions.</p><p><strong>Conclusions: </strong>Educational bias was present in algorithmic and clinicians' predictions, assuming more favourable outcomes for individuals with higher educational level (years of education). This bias might lead to increased stigma and psychosocial burden in patients with lower educational attainment and suboptimal psychosis prevention in those with higher educational attainment.</p>","PeriodicalId":9259,"journal":{"name":"British Journal of Psychiatry","volume":null,"pages":null},"PeriodicalIF":8.7000,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Algorithmic fairness in precision psychiatry: analysis of prediction models in individuals at clinical high risk for psychosis.\",\"authors\":\"Derya Şahin, Lana Kambeitz-Ilankovic, Stephen Wood, Dominic Dwyer, Rachel Upthegrove, Raimo Salokangas, Stefan Borgwardt, Paolo Brambilla, Eva Meisenzahl, Stephan Ruhrmann, Frauke Schultze-Lutter, Rebekka Lencer, Alessandro Bertolino, Christos Pantelis, Nikolaos Koutsouleris, Joseph Kambeitz\",\"doi\":\"10.1192/bjp.2023.141\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Computational models offer promising potential for personalised treatment of psychiatric diseases. For their clinical deployment, fairness must be evaluated alongside accuracy. Fairness requires predictive models to not unfairly disadvantage specific demographic groups. Failure to assess model fairness prior to use risks perpetuating healthcare inequalities. 
Despite its importance, empirical investigation of fairness in predictive models for psychiatry remains scarce.</p><p><strong>Aims: </strong>To evaluate fairness in prediction models for development of psychosis and functional outcome.</p><p><strong>Method: </strong>Using data from the PRONIA study, we examined fairness in 13 published models for prediction of transition to psychosis (<i>n</i> = 11) and functional outcome (<i>n</i> = 2) in people at clinical high risk for psychosis or with recent-onset depression. Using accuracy equality, predictive parity, false-positive error rate balance and false-negative error rate balance, we evaluated relevant fairness aspects for the demographic attributes 'gender' and 'educational attainment' and compared them with the fairness of clinicians' judgements.</p><p><strong>Results: </strong>Our findings indicate systematic bias towards assigning less favourable outcomes to individuals with lower educational attainment in both prediction models and clinicians' judgements, resulting in higher false-positive rates in 7 of 11 models for transition to psychosis. Interestingly, the bias patterns observed in algorithmic predictions were not significantly more pronounced than those in clinicians' predictions.</p><p><strong>Conclusions: </strong>Educational bias was present in algorithmic and clinicians' predictions, assuming more favourable outcomes for individuals with higher educational level (years of education). 
This bias might lead to increased stigma and psychosocial burden in patients with lower educational attainment and suboptimal psychosis prevention in those with higher educational attainment.</p>\",\"PeriodicalId\":9259,\"journal\":{\"name\":\"British Journal of Psychiatry\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":8.7000,\"publicationDate\":\"2024-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"British Journal of Psychiatry\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1192/bjp.2023.141\",\"RegionNum\":1,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHIATRY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"British Journal of Psychiatry","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1192/bjp.2023.141","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHIATRY","Score":null,"Total":0}
Algorithmic fairness in precision psychiatry: analysis of prediction models in individuals at clinical high risk for psychosis.
Background: Computational models offer promise for the personalised treatment of psychiatric diseases. Before clinical deployment, their fairness must be evaluated alongside their accuracy. Fairness requires that predictive models do not unfairly disadvantage specific demographic groups; failure to assess model fairness prior to use risks perpetuating healthcare inequalities. Despite its importance, empirical investigation of fairness in predictive models for psychiatry remains scarce.
Aims: To evaluate fairness in prediction models for development of psychosis and functional outcome.
Method: Using data from the PRONIA study, we examined fairness in 13 published models for prediction of transition to psychosis (n = 11) and functional outcome (n = 2) in people at clinical high risk for psychosis or with recent-onset depression. Using accuracy equality, predictive parity, false-positive error rate balance and false-negative error rate balance, we evaluated relevant fairness aspects for the demographic attributes 'gender' and 'educational attainment' and compared them with the fairness of clinicians' judgements.
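The four fairness criteria named above can be illustrated with a minimal sketch. This is not the study's code; the function names and toy data are hypothetical, and each criterion is simply a per-group confusion-matrix statistic that should be (approximately) equal across demographic groups:

```python
# Hypothetical sketch of the four group-fairness criteria, computed from
# binary outcomes (y_true) and binary predictions (y_pred) for one group.
# Illustrative only; not the models or data from the PRONIA study.

def fairness_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        # accuracy equality: overall accuracy equal across groups
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        # predictive parity: positive predictive value equal across groups
        "ppv": tp / (tp + fp) if (tp + fp) else float("nan"),
        # false-positive error rate balance
        "fpr": fp / (fp + tn) if (fp + tn) else float("nan"),
        # false-negative error rate balance
        "fnr": fn / (fn + tp) if (fn + tp) else float("nan"),
    }

# Toy comparison of two groups (e.g. higher vs lower educational attainment)
# with identical true outcomes but different model predictions:
group_a = fairness_metrics([1, 0, 0, 1, 0, 0], [1, 0, 0, 0, 0, 1])
group_b = fairness_metrics([1, 0, 0, 1, 0, 0], [1, 1, 0, 1, 1, 0])

# A criterion such as false-positive error rate balance is violated when
# the gap between groups is large:
print(group_a["fpr"], group_b["fpr"])  # 0.25 vs 0.5 in this toy example
```

In this framing, the study's key result (higher false-positive rates for individuals with lower educational attainment in 7 of 11 transition models) corresponds to a violation of false-positive error rate balance across the 'educational attainment' attribute.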
Results: Our findings indicate systematic bias towards assigning less favourable outcomes to individuals with lower educational attainment in both prediction models and clinicians' judgements, resulting in higher false-positive rates in 7 of 11 models for transition to psychosis. Interestingly, the bias patterns observed in algorithmic predictions were not significantly more pronounced than those in clinicians' predictions.
Conclusions: Educational bias was present in algorithmic and clinicians' predictions, assuming more favourable outcomes for individuals with higher educational level (years of education). This bias might lead to increased stigma and psychosocial burden in patients with lower educational attainment and suboptimal psychosis prevention in those with higher educational attainment.
Journal introduction:
The British Journal of Psychiatry (BJPsych) is a renowned international journal that undergoes rigorous peer review. It covers various branches of psychiatry, with a specific focus on the clinical aspects of each topic. Published monthly by the Royal College of Psychiatrists, this journal is dedicated to enhancing the prevention, investigation, diagnosis, treatment, and care of mental illness worldwide. It also strives to promote global mental health. In addition to featuring authoritative original research articles from across the globe, the journal includes editorials, review articles, commentaries on contentious issues, a comprehensive book review section, and a dynamic correspondence column. BJPsych is an essential source of information for psychiatrists, clinical psychologists, and other professionals interested in mental health.