Employing the Artificial Intelligence Object Detection Tool YOLOv8 for Real-Time Pain Detection: A Feasibility Study.

IF 2.5 | Medicine, Tier 3 | Q2 Clinical Neurology | Journal of Pain Research | Pub Date: 2024-11-09 | eCollection Date: 2024-01-01 | DOI: 10.2147/JPR.S491574
Marco Cascella, Mohammed Naveed Shariff, Giuliano Lo Bianco, Federica Monaco, Francesca Gargano, Alessandro Simonini, Alfonso Maria Ponsiglione, Ornella Piazza
{"title":"Employing the Artificial Intelligence Object Detection Tool YOLOv8 for Real-Time Pain Detection: A Feasibility Study.","authors":"Marco Cascella, Mohammed Naveed Shariff, Giuliano Lo Bianco, Federica Monaco, Francesca Gargano, Alessandro Simonini, Alfonso Maria Ponsiglione, Ornella Piazza","doi":"10.2147/JPR.S491574","DOIUrl":null,"url":null,"abstract":"<p><strong>Introduction: </strong>Effective pain management is crucial for patient care, impacting comfort, recovery, and overall well-being. Traditional subjective pain assessment methods can be challenging, particularly in specific patient populations. This research explores an alternative approach using computer vision (CV) to detect pain through facial expressions.</p><p><strong>Methods: </strong>The study implements the YOLOv8 real-time object detection model to analyze facial expressions indicative of pain. Given four pain datasets, a dataset of pain-expressing faces was compiled, and each image was carefully labeled based on the presence of pain-associated Action Units (AUs). The labeling distinguished between two classes: pain and no pain. The pain category included specific AUs (AU4, AU6, AU7, AU9, AU10, and AU43) following the Prkachin and Solomon Pain Intensity (PSPI) scoring method. Images showing these AUs with a PSPI score above 2 were labeled as expressing pain. The manual labeling process utilized an open-source tool, makesense.ai, to ensure precise annotation. The dataset was then split into training and testing subsets, each containing a mix of pain and no-pain images. The YOLOv8 model underwent iterative training over 10 epochs. The model's performance was validated using precision, recall, and mean Average Precision (mAP) metrics, and F1 score.</p><p><strong>Results: </strong>When considering all classes collectively, our model attained a mAP of 0.893 at a threshold of 0.5. The precision for \"pain\" and \"nopain\" detection was 0.868 and 0.919, respectively. F1 scores for the classes \"pain\", \"nopain\", and \"all classes\" reached a peak value of 0.80. Finally, the model was tested on the Delaware dataset and in a real-world scenario.</p><p><strong>Discussion: </strong>Despite limitations, this study highlights the promise of using real-time computer vision models for pain detection, with potential applications in clinical settings. Future research will focus on evaluating the model's generalizability across diverse clinical scenarios and its integration into clinical workflows to improve patient care.</p>","PeriodicalId":16661,"journal":{"name":"Journal of Pain Research","volume":null,"pages":null},"PeriodicalIF":2.5000,"publicationDate":"2024-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11559421/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Pain Research","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.2147/JPR.S491574","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"CLINICAL NEUROLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Introduction: Effective pain management is crucial for patient care, impacting comfort, recovery, and overall well-being. Traditional subjective pain assessment methods can be challenging, particularly in specific patient populations. This research explores an alternative approach using computer vision (CV) to detect pain through facial expressions.

Methods: The study implements the YOLOv8 real-time object detection model to analyze facial expressions indicative of pain. Drawing on four existing pain datasets, a dataset of pain-expressing faces was compiled, and each image was labeled based on the presence of pain-associated Action Units (AUs). The labeling distinguished between two classes: pain and no pain. The pain category included specific AUs (AU4, AU6, AU7, AU9, AU10, and AU43) following the Prkachin and Solomon Pain Intensity (PSPI) scoring method. Images showing these AUs with a PSPI score above 2 were labeled as expressing pain. Manual labeling was performed with the open-source tool makesense.ai to ensure precise annotation. The dataset was then split into training and testing subsets, each containing a mix of pain and no-pain images. The YOLOv8 model underwent iterative training over 10 epochs. The model's performance was validated using precision, recall, mean Average Precision (mAP), and F1 score.
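As a rough illustration of the pipeline described above, the sketch below pairs a PSPI-based labeling helper with YOLOv8 training and validation via the Ultralytics Python API. The PSPI formula follows Prkachin and Solomon's published definition (brow lowering plus the stronger of the orbital AUs, the stronger of the levator AUs, and eye closure); the dataset config name (pain.yaml), the pretrained checkpoint, the image size, and the AU intensity encoding are illustrative assumptions, not details taken from the study.

from ultralytics import YOLO  # pip install ultralytics

def pspi_score(au: dict) -> float:
    # Prkachin and Solomon Pain Intensity from AU intensities
    # (AU4, AU6, AU7, AU9, AU10 on a 0-5 scale; AU43 is 0/1 eye closure).
    return (au.get(4, 0)
            + max(au.get(6, 0), au.get(7, 0))
            + max(au.get(9, 0), au.get(10, 0))
            + au.get(43, 0))

def binary_label(au: dict) -> str:
    # Binary class used during annotation: "pain" when PSPI exceeds 2, else "nopain".
    return "pain" if pspi_score(au) > 2 else "nopain"

if __name__ == "__main__":
    # Train a YOLOv8 detector for 10 epochs on the labeled face images
    # (pain.yaml is a placeholder YOLO dataset config listing the two classes).
    model = YOLO("yolov8n.pt")  # pretrained checkpoint; model size is an assumption
    model.train(data="pain.yaml", epochs=10, imgsz=640)

    # Validation reports precision, recall, and mAP per class and overall.
    metrics = model.val()
    print("mAP@0.5 (all classes):", metrics.box.map50)

In this sketch only the PSPI cutoff (above 2) and the 10-epoch budget come from the abstract; everything else, including the choice of the nano checkpoint, is a placeholder that would be tuned in practice.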

Results: When considering all classes collectively, our model attained a mAP of 0.893 at an intersection-over-union (IoU) threshold of 0.5. The precision for "pain" and "nopain" detection was 0.868 and 0.919, respectively. F1 scores for the classes "pain", "nopain", and "all classes" reached a peak value of 0.80. Finally, the model was tested on the Delaware dataset and in a real-world scenario.
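For reference, the per-class F1 values reported above are the harmonic mean of precision and recall, F1 = 2 × Precision × Recall / (Precision + Recall); the abstract does not report the recall values, so the recalls implied by the 0.80 peak are not reconstructed here.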

Discussion: Despite limitations, this study highlights the promise of using real-time computer vision models for pain detection, with potential applications in clinical settings. Future research will focus on evaluating the model's generalizability across diverse clinical scenarios and its integration into clinical workflows to improve patient care.

Source journal
Journal of Pain Research (Clinical Neurology)
CiteScore: 4.50
Self-citation rate: 3.70%
Articles published: 411
Review time: 16 weeks
About the journal: Journal of Pain Research is an international, peer-reviewed, open access journal that welcomes laboratory and clinical findings in the fields of pain research and the prevention and management of pain. Original research, reviews, symposium reports, hypothesis formation and commentaries are all considered for publication. Additionally, the journal now welcomes the submission of pain-policy-related editorials and commentaries, particularly in regard to ethical, regulatory, forensic, and other legal issues in pain medicine, and to the education of pain practitioners and researchers.