Basic emotion detection accuracy using artificial intelligence approaches in facial emotions recognition system: A systematic review

Applied Soft Computing | IF 6.6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-01 | Epub Date: 2025-02-11 | DOI: 10.1016/j.asoc.2025.112867
Chia-Feng Hsu , Sriyani Padmalatha Konara Mudiyanselage , Rismia Agustina , Mei-Feng Lin
Applied Soft Computing, Volume 172, Article 112867. Journal Article. URL: https://www.sciencedirect.com/science/article/pii/S1568494625001784
Citations: 0

Abstract

Facial emotion recognition (FER) systems are pivotal in advancing human communication by interpreting emotions such as happiness, sadness, anger, fear, surprise, and disgust through artificial intelligence (AI). This systematic review examines the accuracy of detecting basic emotions, evaluates the features, algorithms, and datasets used in FER systems, and proposes a taxonomy for their integration into healthcare. A comprehensive search of six databases, covering publications from January 1990 to March 2023, identified 4073 articles, with 35 studies meeting inclusion criteria.
The review revealed that happiness and surprise achieved the highest mean detection accuracies (96.42 % and 96.32 %, respectively), whereas anger and disgust exhibited lower accuracies (91.68 % and 93.71 %, respectively). Fear and sadness had a mean accuracy of 93.87 %. Among AI algorithms, GFFNN demonstrated the highest accuracy (100 %), followed by KNN (97.99 %) and DDBNN (97.77 %). CNN and SVM were the most commonly used algorithms, showing competitive accuracies. The CK+ dataset, while extensively employed, demonstrated a mean accuracy of 96.08 %, lower than RAVDESS, Oulu-CASIA, and other databases.
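The per-emotion figures above are means of the detection accuracies reported across the included studies. The aggregation can be sketched in a few lines of Python; the per-study values below are invented for illustration and are not taken from the review:

```python
from statistics import mean

# Hypothetical per-study detection accuracies (%), keyed by emotion.
# In the review, these values would come from the 35 included studies.
study_accuracies = {
    "happiness": [97.1, 95.8, 96.4],
    "surprise":  [96.0, 96.7],
    "anger":     [90.2, 93.1],
}

# Mean detection accuracy per emotion, as the review reports.
mean_accuracy = {
    emotion: round(mean(values), 2)
    for emotion, values in study_accuracies.items()
}

# Emotions ranked from highest to lowest mean accuracy.
ranked = sorted(mean_accuracy, key=mean_accuracy.get, reverse=True)
print(mean_accuracy)
print(ranked)
```

With these toy numbers, happiness ranks first, mirroring the ordering the review found across real studies.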
This taxonomy provides insights into FER systems' capabilities to enhance patient care by identifying emotional states, pain levels, and overall well-being. Future research should adopt diverse datasets and advanced algorithms to improve FER accuracy, enabling robust integration of these systems into healthcare practices.
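KNN, one of the stronger performers among the reviewed algorithms, is simple enough to sketch with the standard library alone. The two-dimensional feature vectors below stand in for extracted facial features (e.g. landmark distances) and are invented for illustration; a real FER pipeline would use a face detector and a feature extractor upstream:

```python
from math import dist

# Toy training set: (feature vector, emotion label).
# Real FER systems would use high-dimensional facial features here.
train = [
    ((0.9, 0.1), "happiness"),
    ((0.8, 0.2), "happiness"),
    ((0.1, 0.9), "anger"),
    ((0.2, 0.8), "anger"),
]

def knn_predict(x, k=3):
    """Majority vote among the k training samples nearest to x."""
    nearest = sorted(train, key=lambda sample: dist(sample[0], x))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

print(knn_predict((0.85, 0.15)))  # close to the 'happiness' samples
```

The same majority-vote logic scales to the multi-class basic-emotion setting; accuracy then depends chiefly on the quality of the extracted features and the dataset, which is why the review compares algorithms and datasets jointly.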
Source journal
Applied Soft Computing
Category: Engineering & Technology, Computer Science: Interdisciplinary Applications
CiteScore: 15.80
Self-citation rate: 6.90%
Articles per year: 874
Review turnaround: 10.9 months
Journal overview: Applied Soft Computing is an international journal promoting an integrated view of soft computing to solve real-life problems. The focus is to publish the highest quality research in the application and convergence of Fuzzy Logic, Neural Networks, Evolutionary Computing, Rough Sets, and other similar techniques to address real-world complexities. Applied Soft Computing is a rolling publication: articles are published as soon as the editor-in-chief has accepted them, so the website is continuously updated with new articles and publication times are short.