Machine learning model of facial expression outperforms models using analgesia nociception index and vital signs to predict postoperative pain intensity: a pilot study.
Insun Park, Jae Hyon Park, Jongjin Yoon, Hyo-Seok Na, Ah-Young Oh, Junghee Ryu, Bon-Wook Koo
{"title":"Machine learning model of facial expression outperforms models using analgesia nociception index and vital signs to predict postoperative pain intensity: a pilot study.","authors":"Insun Park, Jae Hyon Park, Jongjin Yoon, Hyo-Seok Na, Ah-Young Oh, Junghee Ryu, Bon-Wook Koo","doi":"10.4097/kja.23583","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Few studies have evaluated the use of automated artificial intelligence (AI)-based pain recognition in postoperative settings or the correlation with pain intensity. In this study, various machine learning (ML)-based models using facial expressions, the analgesia nociception index (ANI), and vital signs were developed to predict postoperative pain intensity, and their performances for predicting severe postoperative pain were compared.</p><p><strong>Methods: </strong>In total, 155 facial expressions from patients who underwent gastrectomy were recorded postoperatively; one blinded anesthesiologist simultaneously recorded the ANI score, vital signs, and patient self-assessed pain intensity based on the 11-point numerical rating scale (NRS). The ML models' area under the receiver operating characteristic curves (AUROCs) were calculated and compared using DeLong's test.</p><p><strong>Results: </strong>ML models were constructed using facial expressions, ANI, vital signs, and different combinations of the three datasets. The ML model constructed using facial expressions best predicted an NRS ≥ 7 (AUROC 0.93) followed by the ML model combining facial expressions and vital signs (AUROC 0.84) in the test-set. ML models constructed using combined physiological signals (vital signs, ANI) performed better than models based on individual parameters for predicting NRS ≥ 7, although the AUROCs were inferior to those of the ML model based on facial expressions (all P < 0.050). 
Among these parameters, absolute and relative ANI had the worst AUROCs (0.69 and 0.68, respectively) for predicting NRS ≥ 7.</p><p><strong>Conclusions: </strong>The ML model constructed using facial expressions best predicted severe postoperative pain (NRS ≥ 7) and outperformed models constructed from physiological signals.</p>","PeriodicalId":17855,"journal":{"name":"Korean Journal of Anesthesiology","volume":" ","pages":"195-204"},"PeriodicalIF":4.2000,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10982524/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Korean Journal of Anesthesiology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.4097/kja.23583","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/5 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"ANESTHESIOLOGY","Score":null,"Total":0}
Abstract
Background: Few studies have evaluated the use of automated artificial intelligence (AI)-based pain recognition in postoperative settings or the correlation with pain intensity. In this study, various machine learning (ML)-based models using facial expressions, the analgesia nociception index (ANI), and vital signs were developed to predict postoperative pain intensity, and their performances for predicting severe postoperative pain were compared.
Methods: In total, 155 facial expressions from patients who underwent gastrectomy were recorded postoperatively; one blinded anesthesiologist simultaneously recorded the ANI score, vital signs, and patient self-assessed pain intensity based on the 11-point numerical rating scale (NRS). The ML models' areas under the receiver operating characteristic curve (AUROCs) were calculated and compared using DeLong's test.
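As a minimal illustration of the AUROC evaluation described above, the sketch below computes an AUROC with scikit-learn's `roc_auc_score`. The labels and scores are synthetic placeholders, not study data: `severe_pain` marks a hypothetical self-reported NRS ≥ 7, and `model_score` stands in for a model's predicted probability of severe pain.

```python
# AUROC sketch with synthetic placeholder data (not the study's data).
from sklearn.metrics import roc_auc_score

# Binary outcome: 1 if self-reported NRS >= 7 (severe pain), else 0.
severe_pain = [0, 0, 0, 1, 1, 0, 1, 0, 1, 1]
# Hypothetical model-predicted probabilities of severe pain.
model_score = [0.1, 0.3, 0.2, 0.8, 0.7, 0.4, 0.9, 0.2, 0.35, 0.8]

# AUROC = probability that a randomly chosen severe-pain case is ranked
# above a randomly chosen non-severe case.
auroc = roc_auc_score(severe_pain, model_score)
print(round(auroc, 2))  # → 0.96
```

DeLong's test, used in the study to compare correlated AUROCs, has no scikit-learn implementation; in practice it is typically run via the R package `pROC` or a third-party Python port.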
Results: ML models were constructed using facial expressions, ANI, vital signs, and different combinations of the three datasets. In the test set, the ML model constructed using facial expressions best predicted an NRS ≥ 7 (AUROC: 0.93), followed by the ML model combining facial expressions and vital signs (AUROC: 0.84). ML models constructed using combined physiological signals (vital signs, ANI) performed better than models based on individual parameters for predicting NRS ≥ 7, although their AUROCs were inferior to that of the ML model based on facial expressions (all P < 0.050). Among these parameters, absolute and relative ANI had the worst AUROCs (0.69 and 0.68, respectively) for predicting NRS ≥ 7.
Conclusions: The ML model constructed using facial expressions best predicted severe postoperative pain (NRS ≥ 7) and outperformed models constructed from physiological signals.