Investigating fairness in machine learning-based audio sentiment analysis

Sophina Luitel, Yang Liu, Mohd Anwar
{"title":"Investigating fairness in machine learning-based audio sentiment analysis","authors":"Sophina Luitel,&nbsp;Yang Liu,&nbsp;Mohd Anwar","doi":"10.1007/s43681-024-00453-2","DOIUrl":null,"url":null,"abstract":"<div><p>Audio sentiment analysis is a growing area of research, however little attention has been paid to the fairness of machine learning models in this field. Whilst the current literature covers research on machine learning models’ reliability and fairness in various demographic groups, fairness in audio sentiment analysis with respect to gender is still an uninvestigated field. To fill this knowledge gap, we conducted experiments aimed at assessing the fairness of machine learning algorithms concerning gender within the context of audio sentiment analysis. In this research, we used 442 audio files of happiness and sadness—representing equal samples of male and female subjects—and generated spectrograms for each file. Then we performed feature extraction using bag-of-visual-words method followed by building classifiers using Random Forest, Support Vector Machines, and K-nearest Neighbors algorithms. We investigated whether the machine learning models for audio sentiment analysis are fair across female and male genders. We found the need for gender-specific models for audio sentiment analysis instead of a gender-agnostic-model. Our results provided three pieces of evidence to back up our claim that gender-specific models demonstrate bias in terms of overall accuracy equality when tested using audio samples representing the other gender, as well as combination of both genders. Furthermore, gender-agnostic-model performs poorly in comparison to gender-specific models in classifying sentiments of both male and female audio samples. These findings emphasize the importance of employing an appropriate gender-specific model for an audio sentiment analysis task to ensure fairness and accuracy. The best performance is achieved when using a female-model (78% accuracy) and a male-model (74% accuracy), significantly outperforming the 66% accuracy of the gender-agnostic model.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 2","pages":"1099 - 1108"},"PeriodicalIF":0.0000,"publicationDate":"2024-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-024-00453-2.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-024-00453-2","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Audio sentiment analysis is a growing area of research; however, little attention has been paid to the fairness of machine learning models in this field. While the current literature covers machine learning models' reliability and fairness across various demographic groups, fairness in audio sentiment analysis with respect to gender remains uninvestigated. To fill this knowledge gap, we conducted experiments to assess the fairness of machine learning algorithms with respect to gender in the context of audio sentiment analysis. In this research, we used 442 audio files of happiness and sadness, with equal samples of male and female subjects, and generated a spectrogram for each file. We then performed feature extraction using the bag-of-visual-words method and built classifiers using the Random Forest, Support Vector Machine, and K-nearest Neighbors algorithms. We investigated whether machine learning models for audio sentiment analysis are fair across female and male genders, and we found the need for gender-specific models for audio sentiment analysis rather than a single gender-agnostic model. Our results provide three pieces of evidence for this claim: gender-specific models demonstrate bias, in terms of overall accuracy equality, when tested on audio samples representing the other gender, as well as on a combination of both genders. Furthermore, the gender-agnostic model performs poorly in comparison to the gender-specific models in classifying sentiments of both male and female audio samples. These findings emphasize the importance of employing the appropriate gender-specific model for an audio sentiment analysis task to ensure fairness and accuracy. The best performance is achieved by the female model (78% accuracy) and the male model (74% accuracy), both significantly outperforming the 66% accuracy of the gender-agnostic model.
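
The pipeline described in the abstract (spectrogram generation, bag-of-visual-words feature extraction, then classification with Random Forest, SVM, or KNN) can be outlined as a minimal sketch. The paper does not provide source code, so the example below is an illustration only: the use of librosa for log-mel spectrograms, OpenCV ORB descriptors as the local image features, and scikit-learn for clustering and classification are assumptions, and all function names and parameter choices are hypothetical rather than the authors' implementation.

```python
# Minimal sketch of a spectrogram + bag-of-visual-words sentiment pipeline.
# Library choices (librosa, OpenCV, scikit-learn) and parameters are assumptions,
# not the authors' published code.
import numpy as np
import librosa
import cv2
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def spectrogram_image(path, sr=22050):
    """Load an audio file and return its log-mel spectrogram as an 8-bit grayscale image."""
    y, sr = librosa.load(path, sr=sr)
    S = librosa.feature.melspectrogram(y=y, sr=sr)
    S_db = librosa.power_to_db(S, ref=np.max)
    return ((S_db - S_db.min()) / (S_db.max() - S_db.min() + 1e-9) * 255).astype(np.uint8)

def local_descriptors(img):
    """Extract local ORB descriptors from a spectrogram image (one possible choice of feature)."""
    orb = cv2.ORB_create()
    _, desc = orb.detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 32), dtype=np.uint8)

def bovw_histograms(descriptor_sets, n_words=100):
    """Cluster descriptors into a visual vocabulary and encode each file as a word histogram."""
    vocab = KMeans(n_clusters=n_words, n_init=10, random_state=0)
    vocab.fit(np.vstack(descriptor_sets).astype(np.float32))
    hists = []
    for desc in descriptor_sets:
        words = vocab.predict(desc.astype(np.float32)) if len(desc) else np.empty(0, dtype=int)
        hist, _ = np.histogram(words, bins=np.arange(n_words + 1))
        hists.append(hist / max(hist.sum(), 1))  # normalize per file
    return np.array(hists), vocab

# Hypothetical usage: audio_paths and labels (happiness/sadness) are placeholders.
# descs = [local_descriptors(spectrogram_image(p)) for p in audio_paths]
# X, vocab = bovw_histograms(descs)
# clf = RandomForestClassifier(random_state=0).fit(X, labels)
```

Under this reading, gender-specific models would simply be trained by restricting `audio_paths` and `labels` to one gender's samples, while the gender-agnostic model is trained on the pooled data; the same sketch applies to the SVM and KNN classifiers by swapping the final estimator.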
