Could artificial intelligence replace fine-needle aspiration in endoscopic ultrasound?
S. Jiang, N. Parsa, M. Byrne
{"title":"Could artificial intelligence replace fine-needle aspiration in endoscopic ultrasound?","authors":"S. Jiang, N. Parsa, M. Byrne","doi":"10.21037/gist-22-11","DOIUrl":null,"url":null,"abstract":"© Gastrointestinal Stromal Tumor. All rights reserved. Gastrointest Stromal Tumor 2022 | https://dx.doi.org/10.21037/gist-22-11 The recent application of artificial intelligence (AI) in the field of gastroenterology has shown promising results in the diagnosis and management of digestive diseases (1-3). Solutions such as AI-powered detection and diagnosis systems are now commercially available for colorectal polyps (4). The backbone of AI systems for image classification is the convolutional neural network (CNN), a deep-learning algorithm that conducts multi-level image analysis through pattern recognition, and improves its own diagnostic ability by training with large datasets (5,6). With the ability to integrate pixel-level data, CNN is able to aid endoscopists in the rapid interpretation of seemingly ambiguous visual data. One such area of diagnostic dilemma is the evaluation of gastric subepithelial lesions (SELs). While endoscopic ultrasound (EUS) is the most accurate imaging modality, there are no definitive EUS features to differentiate gastrointestinal stromal tumors (GISTs) from the commonly encountered gastrointestinal leiomyomas (GILs) (7-9). Misdiagnosis of GISTs and GILs are thought to comprise the majority of incorrect EUS diagnoses (9). Given the malignant potential of GISTs, it is crucial to accurately diagnose these lesions and to differentiate them from GILs, which are benign. The current standard is to differentiate these two by obtaining tissue samples with fine-needle aspiration or biopsy (EUS-FNA/B). However, FNA/B is invasive and is reported to have a lower diagnostic rate for SELs smaller than 20 mm (9,10). In this issue of Endoscopy, Yang et al. report the result of their AI-powered EUS model for differentiation between GISTs and GILs (11). 
Using a CNN for image recognition, the AI model was trained, validated, and evaluated on a total of 10,439 EUS images from 752 patients with histologically confirmed GISTs and GILs from four endoscopic centers, collected in aggregate from 2013 to 2020. They report a significantly higher diagnostic accuracy with the AI model compared with the expert endo-sonographer (94.0% vs. 70.2%, P value <0.001). More importantly, in the prospective evaluation of 508 consecutive patients with SELs, of whom 132 underwent histologic confirmation, the diagnostic accuracy remained significantly higher with the AI-powered EUS compared with the expert endosonographer (78.8% vs. 69.7%, P value =0.01). When examining only cases of histologically-confirmed GISTS or GILs, AI-joint diagnosis also had significantly higher accuracy, specificity, and PPV at 92.2%, 95.1%, and 94.1%, respectively, compared to individual diagnosis alone at 76.6% (P value =0.01), 65.9% (P value =0.002), and 69.6% (P value <0.01), respectively. The sensitivity and NPV of AI-joint diagnosis were similar to individual diagnosis. These are promising results in applying deep learning to real-time EUS to distinguish between GISTs and GILs. Previous studies have reported an improved diagnostic Editorial Commentary","PeriodicalId":93755,"journal":{"name":"Gastrointestinal stromal tumor","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Gastrointestinal stromal tumor","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.21037/gist-22-11","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
© Gastrointestinal Stromal Tumor. All rights reserved. Gastrointest Stromal Tumor 2022 | https://dx.doi.org/10.21037/gist-22-11 The recent application of artificial intelligence (AI) in the field of gastroenterology has shown promising results in the diagnosis and management of digestive diseases (1-3). Solutions such as AI-powered detection and diagnosis systems are now commercially available for colorectal polyps (4). The backbone of AI systems for image classification is the convolutional neural network (CNN), a deep-learning algorithm that conducts multi-level image analysis through pattern recognition and improves its own diagnostic ability by training on large datasets (5,6). With the ability to integrate pixel-level data, a CNN can aid endoscopists in the rapid interpretation of seemingly ambiguous visual data. One such area of diagnostic dilemma is the evaluation of gastric subepithelial lesions (SELs). While endoscopic ultrasound (EUS) is the most accurate imaging modality, there are no definitive EUS features to differentiate gastrointestinal stromal tumors (GISTs) from the commonly encountered gastrointestinal leiomyomas (GILs) (7-9). Misdiagnosis of GISTs and GILs is thought to comprise the majority of incorrect EUS diagnoses (9). Given the malignant potential of GISTs, it is crucial to accurately diagnose these lesions and to differentiate them from GILs, which are benign. The current standard is to differentiate the two by obtaining tissue samples with fine-needle aspiration or biopsy (EUS-FNA/B). However, FNA/B is invasive and is reported to have a lower diagnostic rate for SELs smaller than 20 mm (9,10). In this issue of Endoscopy, Yang et al. report the results of their AI-powered EUS model for differentiation between GISTs and GILs (11).
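The "pixel-level pattern recognition" at the heart of a CNN can be illustrated with a single convolution operation. The sketch below is purely illustrative (the image, kernel, and function are invented for this example, not drawn from the study's model): a small edge-detecting kernel slides across a toy grayscale image and responds strongly wherever intensity changes abruptly, which is the kind of low-level feature a trained CNN composes into higher-level lesion characteristics.

```python
import numpy as np

def conv2d(image, kernel):
    # Valid-mode 2D cross-correlation: slide the kernel over the image
    # and take the elementwise product-sum at each position.
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# Toy 4x4 "image" with a sharp vertical boundary between dark (0)
# and bright (1) regions -- a stand-in for a lesion edge.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# A vertical-edge kernel: negative weights on the left column,
# positive on the right, so it fires where intensity rises left-to-right.
edge_kernel = np.array([
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
], dtype=float)

response = conv2d(image, edge_kernel)  # uniformly positive: edge detected everywhere
```

In a real CNN, many such kernels are learned from labeled EUS images rather than hand-designed, and their responses are stacked and pooled over successive layers.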
Using a CNN for image recognition, the AI model was trained, validated, and evaluated on a total of 10,439 EUS images from 752 patients with histologically confirmed GISTs and GILs from four endoscopic centers, collected in aggregate from 2013 to 2020. They report a significantly higher diagnostic accuracy with the AI model compared with the expert endosonographer (94.0% vs. 70.2%, P<0.001). More importantly, in the prospective evaluation of 508 consecutive patients with SELs, of whom 132 underwent histologic confirmation, the diagnostic accuracy remained significantly higher with the AI-powered EUS compared with the expert endosonographer (78.8% vs. 69.7%, P=0.01). When examining only cases of histologically confirmed GISTs or GILs, AI-joint diagnosis also had significantly higher accuracy, specificity, and PPV at 92.2%, 95.1%, and 94.1%, respectively, compared with individual diagnosis alone at 76.6% (P=0.01), 65.9% (P=0.002), and 69.6% (P<0.01), respectively. The sensitivity and NPV of AI-joint diagnosis were similar to those of individual diagnosis. These are promising results in applying deep learning to real-time EUS to distinguish between GISTs and GILs. Previous studies have reported an improved diagnostic …
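The metrics compared above all derive from the same 2x2 confusion matrix, with GIST as the "positive" class and GIL as "negative". A minimal sketch of the definitions, using invented counts for illustration only (not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    # Standard confusion-matrix metrics for a binary diagnostic test.
    # tp: GISTs correctly called GIST; fn: GISTs missed (called GIL);
    # tn: GILs correctly called GIL;   fp: GILs wrongly called GIST.
    total = tp + fp + tn + fn
    return {
        "accuracy":    (tp + tn) / total,   # overall fraction correct
        "sensitivity": tp / (tp + fn),      # true-positive rate
        "specificity": tn / (tn + fp),      # true-negative rate
        "ppv":         tp / (tp + fp),      # positive predictive value
        "npv":         tn / (tn + fn),      # negative predictive value
    }

# Hypothetical counts for a cohort of 92 confirmed lesions.
m = diagnostic_metrics(tp=45, fp=3, tn=39, fn=5)
```

Note that accuracy, PPV, and NPV shift with the GIST:GIL mix of the cohort, while sensitivity and specificity do not, which is why a test can show a large accuracy gain in a histologically confirmed subset yet a smaller one in consecutive unselected patients.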