Tackling algorithmic bias and promoting transparency in health datasets: the STANDING Together consensus recommendations

Joseph E Alderman MBChB, Joanne Palmer PhD, Elinor Laws MBBCh, Melissa D McCradden PhD, Johan Ordish MA, Marzyeh Ghassemi PhD, Stephen R Pfohl PhD, Negar Rostamzadeh PhD, Heather Cole-Lewis PhD, Prof Ben Glocker PhD, Prof Melanie Calvert PhD, Tom J Pollard PhD, Jaspret Gill MSc, Jacqui Gath MBCS, Adewale Adebajo MBE, Jude Beng BSc, Cassandra H Leung, Stephanie Kuku MD, Lesley-Anne Farmer BSc, Rubeta N Matin PhD, Xiaoxuan Liu PhD

Lancet Digital Health, volume 7, issue 1, pages e64–e88; published 2025-01-01. DOI: 10.1016/S2589-7500(24)00224-3. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11668905/pdf/
Without careful dissection of the ways in which biases can be encoded into artificial intelligence (AI) health technologies, there is a risk of perpetuating existing health inequalities at scale. One major source of bias is the data that underpins such technologies. The STANDING Together recommendations aim to encourage transparency regarding limitations of health datasets and proactive evaluation of their effect across population groups. Draft recommendation items were informed by a systematic review and stakeholder survey. The recommendations were developed using a Delphi approach, supplemented by a public consultation and international interview study. Overall, more than 350 representatives from 58 countries provided input into this initiative. 194 Delphi participants from 25 countries voted and provided comments on 32 candidate items across three electronic survey rounds and one in-person consensus meeting. The 29 STANDING Together consensus recommendations are presented here in two parts. Recommendations for Documentation of Health Datasets provide guidance for dataset curators to enable transparency around data composition and limitations. Recommendations for Use of Health Datasets aim to enable identification and mitigation of algorithmic biases that might exacerbate health inequalities. These recommendations are intended to prompt proactive inquiry rather than acting as a checklist. We hope to raise awareness that no dataset is free of limitations, so transparent communication of data limitations should be perceived as valuable, and absence of this information as a limitation. We hope that adoption of the STANDING Together recommendations by stakeholders across the AI health technology lifecycle will enable everyone in society to benefit from technologies which are safe and effective.
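As a purely illustrative sketch, not drawn from the paper itself, the kind of subgroup-stratified evaluation encouraged by the Use of Health Datasets recommendations could be expressed in code as follows; the group labels, predictions, and choice of sensitivity as the metric are hypothetical examples chosen for brevity.

```python
# Illustrative sketch only: subgroup-stratified evaluation of a binary
# classifier, in the spirit of (not prescribed by) the STANDING Together
# "Use of Health Datasets" recommendations. All data below are hypothetical.
from collections import defaultdict

def stratified_sensitivity(y_true, y_pred, groups):
    """Sensitivity (true-positive rate) computed separately for each population group."""
    tp = defaultdict(int)  # true positives per group
    fn = defaultdict(int)  # false negatives per group
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 1:
            if pred == 1:
                tp[group] += 1
            else:
                fn[group] += 1
    # Only report groups with at least one positive case to avoid division by zero.
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(groups) if (tp[g] + fn[g]) > 0}

# Hypothetical labels, model outputs, and a population-group attribute.
y_true = [1, 0, 1, 1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "A"]

print(stratified_sensitivity(y_true, y_pred, groups))
# A gap between groups would prompt further inquiry into dataset composition
# and possible mitigation, rather than serving as a pass/fail checklist item.
```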
About the journal:
The Lancet Digital Health publishes important, innovative, and practice-changing research on any topic connected with digital technology in clinical medicine, public health, and global health.
The journal's open access content crosses subject boundaries, building bridges between health professionals and researchers. By bringing together the most important advances in this multidisciplinary field, The Lancet Digital Health is the most prominent publishing venue in digital health.
We publish a range of content types, including Articles, Review, Comment, and Correspondence, to help promote digital technologies in health practice worldwide.