Lung ultrasound among Expert operator'S: ScOring and iNter-rater reliability analysis (LESSON study): a secondary analysis of the COWS study from the ITALUS group.

Enrico Boero, Luna Gargani, Annia Schreiber, Serena Rovida, Giampaolo Martinelli, Salvatore Maurizio Maggiore, Felice Urso, Anna Camporesi, Annarita Tullio, Fiorella Anna Lombardi, Gianmaria Cammarota, Daniele Guerino Biasucci, Elena Giovanna Bignami, Cristian Deana, Giovanni Volpicelli, Sergio Livigni, Luigi Vetrugno
Journal of Anesthesia, Analgesia and Critical Care, published 2024-07-31. DOI: 10.1186/s44158-024-00187-x. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11293153/pdf/

Abstract

Background: Lung ultrasonography (LUS) is a non-invasive imaging method used to diagnose and monitor conditions such as pulmonary edema, pneumonia, and pneumothorax. It is particularly valuable where other imaging modalities such as CT or chest X-ray are of limited availability, especially in low- and middle-income countries with reduced resources. Furthermore, LUS reduces radiation exposure and the associated risk of blood cancers, which is particularly relevant in children and young subjects. The score obtained with LUS allows semi-quantification of regional loss of aeration, and it can provide a valuable and reliable assessment of the severity of most respiratory diseases. However, the inter-observer reliability of the score has never been systematically assessed. This study aims to assess experienced LUS operators' agreement on a sample of video clips showing predefined findings.

Methods: Twenty-five anonymized video clips, comprehensively depicting the different values of the LUS score, were shown via an online form to renowned LUS experts who were blinded to the patients' clinical data and the study's aims. Clips were acquired with five different ultrasound machines. The Fleiss-Cohen weighted kappa was used to evaluate the experts' agreement.
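For readers unfamiliar with the statistic, the Fleiss-Cohen weighted kappa uses quadratic weights, so near-miss disagreements on the ordinal 0-3 LUS scale are penalized less than large ones. A minimal two-rater sketch of the computation (the function name and example ratings are illustrative, not data from the study):

```python
def quadratic_weighted_kappa(r1, r2, k=4):
    """Fleiss-Cohen (quadratic-weighted) kappa for two raters scoring
    the same items on an ordinal scale 0..k-1 (e.g., LUS scores 0-3)."""
    n = len(r1)
    # observed confusion matrix: obs[i][j] = count of items rated i by
    # rater 1 and j by rater 2
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[a][b] += 1
    # marginal rating distributions for each rater
    p1 = [sum(obs[i]) / n for i in range(k)]
    p2 = [sum(obs[i][j] for i in range(k)) / n for j in range(k)]
    # quadratic disagreement weight: ((i - j) / (k - 1))^2, so a score
    # of 3 vs 2 is penalized far less than 3 vs 0
    num = den = 0.0  # observed vs chance-expected weighted disagreement
    for i in range(k):
        for j in range(k):
            w = ((i - j) / (k - 1)) ** 2
            num += w * obs[i][j] / n
            den += w * p1[i] * p2[j]
    return 1.0 - num / den

# perfect agreement yields kappa = 1.0; agreement no better than
# chance yields kappa near 0
print(quadratic_weighted_kappa([0, 1, 2, 3, 0, 1, 2, 3],
                               [0, 1, 2, 3, 0, 1, 2, 3]))  # 1.0
```

The study pooled 20 raters rather than two, but the weighting principle is the same, which is why scores 0 and 3 (the extremes, where a disagreement is necessarily large) drive reproducibility more than the intermediate scores 1 and 2.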

Results: Over a period of 3 months, 20 experienced operators completed the assessment. Most worked in the ICU (10), followed by the ED (6), HDU (2), cardiology ward (1), and obstetrics/gynecology department (1). The mean proportional LUS score was 15.3 (SD 1.6). Inter-rater agreement varied: 6 clips had full agreement, 3 had 19 of 20 raters agreeing, and 3 had 18 agreeing, while the remaining 13 had 17 or fewer raters agreeing on the assigned score. Scores 0 and 3 were more reproducible than scores 1 and 2. Fleiss' kappa for overall answers was 0.87 (95% CI 0.815-0.931, p < 0.001).

Conclusions: The inter-rater agreement among experienced LUS operators is very high, although not perfect. The strong agreement and small variance support the conclusion that a 20% tolerance around a measured LUS score is a reliable estimate of the patient's true LUS score, resulting in reduced variability in score interpretation and greater confidence in its clinical use.
