Deconstructing demographic bias in speech-based machine learning models for digital health
Michael Yang, Abd-Allah El-Attar, Theodora Chaspari
Frontiers in Digital Health, published 2024-07-25. DOI: 10.3389/fdgth.2024.1351637
Abstract
Machine learning (ML) algorithms have been heralded as promising solutions for realizing assistive systems in digital healthcare, owing to their ability to detect fine-grained patterns that are not easily perceived by humans. Yet, ML algorithms have also been critiqued for treating individuals differently based on their demography, thereby propagating existing disparities. This paper explores gender and race bias in speech-based ML algorithms that detect behavioral and mental health outcomes.

The paper examines potential sources of bias in the data used to train the ML models, encompassing the acoustic features extracted from speech signals and their associated labels, as well as bias in the ML decisions themselves. It further examines approaches to reduce existing bias: using the features that are least informative of one's demographic attributes as the ML input, and transforming the feature space in an adversarial manner to diminish the evidence of demographic information while retaining information about the behavioral and mental health state of interest.

Results are presented in two domains, the first pertaining to gender and race bias when estimating levels of anxiety, and the second pertaining to gender bias in detecting depression. Findings indicate statistically significant differences in both acoustic features and labels among demographic groups, as well as differential ML performance across groups. The statistically significant differences present in the label space are partially preserved in the ML decisions. Although variations in ML performance across demographic groups were noted, results are mixed regarding the models' ability to accurately estimate healthcare outcomes for the sensitive groups.

These findings underscore the need for careful and thoughtful design when developing ML models that preserve crucial aspects of the data and perform effectively across all populations in digital healthcare applications.
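The first mitigation strategy described above, keeping only the acoustic features that carry the least information about a demographic attribute, can be illustrated with a minimal sketch. This is not the authors' exact pipeline; it assumes a feature matrix `X`, a demographic label vector `demographic`, and a cutoff `k_keep`, all of which are hypothetical placeholders, and uses mutual information as one possible informativeness score.

```python
# Hedged sketch: rank acoustic features by mutual information with a demographic
# attribute (e.g., gender) and keep only the least-informative ones as input to
# the downstream health-state classifier. Names and shapes are illustrative.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def least_demographic_features(X, demographic, k_keep=20, random_state=0):
    """Return indices of the k_keep features with the lowest mutual
    information with the demographic attribute."""
    mi = mutual_info_classif(X, demographic, random_state=random_state)
    return np.argsort(mi)[:k_keep]  # lowest-MI features first

# Assumed usage: X_train holds acoustic features, gender is a 0/1 label vector.
# idx = least_demographic_features(X_train, gender, k_keep=20)
# clf.fit(X_train[:, idx], anxiety_labels)
```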
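The second strategy, adversarially transforming the feature space so that demographic information is diminished while the health-state signal is retained, is commonly realized with a gradient-reversal adversary. The sketch below assumes PyTorch; the layer sizes, heads, and loss weighting are illustrative assumptions, not the configuration reported in the paper.

```python
# Hedged sketch of adversarial debiasing via a gradient-reversal layer: an
# encoder feeds a task head (health outcome) and an adversary (demographic
# attribute); reversed gradients push the encoder to strip demographic evidence.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse and scale the gradient flowing back into the encoder.
        return -ctx.lambd * grad_output, None

class AdversarialDebiaser(nn.Module):
    def __init__(self, n_features, n_hidden=64, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
        self.task_head = nn.Linear(n_hidden, 1)   # e.g., depression logit
        self.adversary = nn.Linear(n_hidden, 1)   # e.g., gender logit

    def forward(self, x):
        z = self.encoder(x)
        y_task = self.task_head(z)
        y_adv = self.adversary(GradReverse.apply(z, self.lambd))
        return y_task, y_adv

# Illustrative joint objective: minimizing task loss plus adversary loss, with
# the reversed gradients discouraging demographically informative representations.
# loss = bce(y_task, health_label) + bce(y_adv, demographic_label)
```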