Timothy A Heintz, Anusha Badathala, Avery Wooten, Cassandra W Cu, Alfred Wallace, Benjamin Pham, Arthur W Wallace, Julien Cobert
{"title":"围手术期患者计算机视觉自动伤害感觉识别的初步开发与验证。","authors":"Timothy A Heintz, Anusha Badathala, Avery Wooten, Cassandra W Cu, Alfred Wallace, Benjamin Pham, Arthur W Wallace, Julien Cobert","doi":"10.1097/ALN.0000000000005370","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Effective pain recognition and treatment in perioperative environments reduce length of stay and decrease risk of delirium and chronic pain. We sought to develop and validate preliminary computer vision-based approaches for nociception detection in hospitalized patients.</p><p><strong>Methods: </strong>Prospective observational cohort study using red-green-blue camera detection of perioperative patients. Adults (≥18 years) admitted for surgical procedures to the San Francisco Veterans Affairs Medical Center (SFVAMC) were included across 2 study phases: (1) algorithm development phase and (2) internal validation phase. Continuous recordings occurred perioperatively across any postoperative setting. We inputted facial images into convolutional neural networks using a pretrained backbone, to detect (1) critical care pain observation tool (CPOT) and (2) numerical rating scale (NRS). Outcomes were binary pain/no-pain. We performed external validation for CPOT and NRS classification on data from University of Northern British Columbia-McMaster University (UNBC) and Delaware Pain Database. Perturbation models were used for explainability.</p><p><strong>Results: </strong>We included 130 patients for development, 77 patients for validation cohort and 25 patients from UNBC and 229 patients from Delaware datasets for external validation. Model area under the curve of the receiver operating characteristic for CPOT models were 0.71 (95% confidence interval [CI] 0.70, 0.74) on the development cohort, 0.91 (95% CI 0.90, 0.92) on the SFVAMC validation cohort, 0.91 (0.89, 0.93) on UNBC and 0.80 (95% CI 0.75, 0.85) on Delaware. NRS model had lower performance (AUC 0.58 [95% CI 0.55, 0.61]). Brier scores improved following calibration across multiple different techniques. Perturbation models for CPOT models revealed eyebrows, nose, lips, and foreheads were most important for model prediction.</p><p><strong>Conclusions: </strong>Automated nociception detection using computer vision alone is feasible but requires additional testing and validation given small datasets used. Future multicenter observational studies are required to better understand the potential for automated continuous assessments for nociception detection in hospitalized patients.</p>","PeriodicalId":7970,"journal":{"name":"Anesthesiology","volume":" ","pages":""},"PeriodicalIF":9.1000,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Preliminary Development and Validation of Automated Nociception Recognition Using Computer Vision in Perioperative Patients.\",\"authors\":\"Timothy A Heintz, Anusha Badathala, Avery Wooten, Cassandra W Cu, Alfred Wallace, Benjamin Pham, Arthur W Wallace, Julien Cobert\",\"doi\":\"10.1097/ALN.0000000000005370\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Effective pain recognition and treatment in perioperative environments reduce length of stay and decrease risk of delirium and chronic pain. 
We sought to develop and validate preliminary computer vision-based approaches for nociception detection in hospitalized patients.</p><p><strong>Methods: </strong>Prospective observational cohort study using red-green-blue camera detection of perioperative patients. Adults (≥18 years) admitted for surgical procedures to the San Francisco Veterans Affairs Medical Center (SFVAMC) were included across 2 study phases: (1) algorithm development phase and (2) internal validation phase. Continuous recordings occurred perioperatively across any postoperative setting. We inputted facial images into convolutional neural networks using a pretrained backbone, to detect (1) critical care pain observation tool (CPOT) and (2) numerical rating scale (NRS). Outcomes were binary pain/no-pain. We performed external validation for CPOT and NRS classification on data from University of Northern British Columbia-McMaster University (UNBC) and Delaware Pain Database. Perturbation models were used for explainability.</p><p><strong>Results: </strong>We included 130 patients for development, 77 patients for validation cohort and 25 patients from UNBC and 229 patients from Delaware datasets for external validation. Model area under the curve of the receiver operating characteristic for CPOT models were 0.71 (95% confidence interval [CI] 0.70, 0.74) on the development cohort, 0.91 (95% CI 0.90, 0.92) on the SFVAMC validation cohort, 0.91 (0.89, 0.93) on UNBC and 0.80 (95% CI 0.75, 0.85) on Delaware. NRS model had lower performance (AUC 0.58 [95% CI 0.55, 0.61]). Brier scores improved following calibration across multiple different techniques. Perturbation models for CPOT models revealed eyebrows, nose, lips, and foreheads were most important for model prediction.</p><p><strong>Conclusions: </strong>Automated nociception detection using computer vision alone is feasible but requires additional testing and validation given small datasets used. Future multicenter observational studies are required to better understand the potential for automated continuous assessments for nociception detection in hospitalized patients.</p>\",\"PeriodicalId\":7970,\"journal\":{\"name\":\"Anesthesiology\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":9.1000,\"publicationDate\":\"2025-01-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Anesthesiology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1097/ALN.0000000000005370\",\"RegionNum\":1,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ANESTHESIOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Anesthesiology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1097/ALN.0000000000005370","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ANESTHESIOLOGY","Score":null,"Total":0}
Preliminary Development and Validation of Automated Nociception Recognition Using Computer Vision in Perioperative Patients.
Background: Effective pain recognition and treatment in perioperative environments reduce length of stay and decrease risk of delirium and chronic pain. We sought to develop and validate preliminary computer vision-based approaches for nociception detection in hospitalized patients.
Methods: This was a prospective observational cohort study using red-green-blue camera recordings of perioperative patients. Adults (≥18 years) admitted for surgical procedures to the San Francisco Veterans Affairs Medical Center (SFVAMC) were included across two study phases: (1) an algorithm development phase and (2) an internal validation phase. Continuous recordings were obtained perioperatively across postoperative settings. We input facial images into convolutional neural networks with a pretrained backbone to predict (1) Critical Care Pain Observation Tool (CPOT) and (2) numerical rating scale (NRS) scores. Outcomes were binarized as pain/no pain. We performed external validation of the CPOT and NRS classifiers on data from the University of Northern British Columbia-McMaster University (UNBC) dataset and the Delaware Pain Database. Perturbation models were used for explainability.
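The abstract does not specify the architecture or training details, so the following is only a minimal sketch of the general approach it describes: a binary pain/no-pain classifier built on a pretrained convolutional backbone. The choice of ResNet-18, the 224×224 input size, and the optimizer settings are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): binary pain/no-pain classifier
# built on a pretrained convolutional backbone, as described in the Methods.
# Backbone choice (ResNet-18), input size, and optimizer are assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms

class PainClassifier(nn.Module):
    def __init__(self, freeze_backbone: bool = True):
        super().__init__()
        # ImageNet-pretrained backbone; the final layer is replaced with a
        # single-logit head for the binarized CPOT (or NRS) outcome.
        self.backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        if freeze_backbone:
            for p in self.backbone.parameters():
                p.requires_grad = False
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)  # raw logit; apply sigmoid for probability

# Typical face-crop preprocessing for an ImageNet-pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = PainClassifier()
criterion = nn.BCEWithLogitsLoss()  # binary pain/no-pain target
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)
```

With a frozen backbone, only the single-logit head is trained, which is one common way to adapt a pretrained network to a small clinical dataset.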
Results: We included 130 patients in the development cohort, 77 patients in the internal validation cohort, and, for external validation, 25 patients from the UNBC dataset and 229 patients from the Delaware dataset. The area under the receiver operating characteristic curve (AUC) for the CPOT models was 0.71 (95% confidence interval [CI] 0.70, 0.74) in the development cohort, 0.91 (95% CI 0.90, 0.92) in the SFVAMC validation cohort, 0.91 (95% CI 0.89, 0.93) on UNBC, and 0.80 (95% CI 0.75, 0.85) on Delaware. The NRS model had lower performance (AUC 0.58 [95% CI 0.55, 0.61]). Brier scores improved following calibration with multiple different techniques. Perturbation analyses of the CPOT models revealed that the eyebrows, nose, lips, and forehead were most important for model prediction.
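The abstract reports AUCs with 95% CIs and improved Brier scores after calibration but does not state how these were computed. The sketch below shows one standard way to obtain a percentile-bootstrap CI for the AUC and to compare Brier scores before and after Platt-scaling calibration; the use of bootstrapping and Platt scaling here is an assumption, not a description of the study's methods.

```python
# Illustrative sketch only: bootstrap 95% CI for the AUC, and Brier score
# before/after Platt-scaling calibration. The study's exact calibration
# techniques are not detailed in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss

def auc_with_bootstrap_ci(y_true, y_prob, n_boot=2000, seed=0):
    """Point estimate and percentile-bootstrap 95% CI for the AUC."""
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    rng = np.random.default_rng(seed)
    n = len(y_true)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if np.unique(y_true[idx]).size < 2:  # resample must contain both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_prob[idx]))
    lo, hi = np.percentile(aucs, [2.5, 97.5])
    return roc_auc_score(y_true, y_prob), (lo, hi)

def platt_calibrate(y_cal, p_cal, p_test):
    """Platt scaling: fit a logistic regression on held-out predicted probabilities."""
    p_cal = np.asarray(p_cal).reshape(-1, 1)
    p_test = np.asarray(p_test).reshape(-1, 1)
    lr = LogisticRegression()
    lr.fit(p_cal, y_cal)
    return lr.predict_proba(p_test)[:, 1]

# Example usage with hypothetical arrays y_val, p_val, y_test, p_test:
# auc, (ci_lo, ci_hi) = auc_with_bootstrap_ci(y_test, p_test)
# p_test_cal = platt_calibrate(y_val, p_val, p_test)
# print(brier_score_loss(y_test, p_test), brier_score_loss(y_test, p_test_cal))
```

The perturbation-based explainability result (eyebrows, nose, lips, and forehead driving predictions) is also commonly implemented as occlusion sensitivity: patches of the input image are masked and the drop in predicted pain probability is mapped back to facial regions. The routine below is a generic occlusion-sensitivity sketch under that assumption, not the authors' implementation.

```python
# Generic occlusion-sensitivity sketch (an assumed form of the perturbation
# analysis, not the authors' code): mask square patches of the face image and
# record how much the predicted pain probability drops at each location.
import torch

@torch.no_grad()
def occlusion_sensitivity(model, image, patch=32, stride=16, fill=0.0):
    """image: a (3, H, W) tensor already preprocessed for the model."""
    model.eval()
    _, h, w = image.shape
    base = torch.sigmoid(model(image.unsqueeze(0)))[0, 0].item()
    heatmap = torch.zeros((h - patch) // stride + 1, (w - patch) // stride + 1)
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.clone()
            occluded[:, y:y + patch, x:x + patch] = fill
            prob = torch.sigmoid(model(occluded.unsqueeze(0)))[0, 0].item()
            heatmap[i, j] = base - prob  # larger drop => region is more important
    return heatmap
```

Averaged over many frames, such heatmaps would be expected to highlight the regions reported above if those regions drive the model's predictions.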
Conclusions: Automated nociception detection using computer vision alone is feasible but requires additional testing and validation given the small datasets used. Future multicenter observational studies are needed to better understand the potential of automated, continuous assessments for nociception detection in hospitalized patients.
Journal Introduction:
Established in 1940, Anesthesiology has emerged as a prominent leader in the field of anesthesiology, encompassing perioperative, critical care, and pain medicine. As the esteemed journal of the American Society of Anesthesiologists, Anesthesiology operates independently with full editorial freedom. Its distinguished Editorial Board, comprising renowned professionals from across the globe, drives the advancement of the specialty by presenting innovative research through immediate open access to select articles and granting free access to all published articles after a six-month period. Furthermore, Anesthesiology actively promotes groundbreaking studies through an influential press release program. The journal's unwavering commitment lies in the dissemination of exemplary work that enhances clinical practice and revolutionizes the practice of medicine within our discipline.