{"title":"人工智能能否取代内镜超声中的细针穿刺?","authors":"S. Jiang, N. Parsa, M. Byrne","doi":"10.21037/gist-22-11","DOIUrl":null,"url":null,"abstract":"© Gastrointestinal Stromal Tumor. All rights reserved. Gastrointest Stromal Tumor 2022 | https://dx.doi.org/10.21037/gist-22-11 The recent application of artificial intelligence (AI) in the field of gastroenterology has shown promising results in the diagnosis and management of digestive diseases (1-3). Solutions such as AI-powered detection and diagnosis systems are now commercially available for colorectal polyps (4). The backbone of AI systems for image classification is the convolutional neural network (CNN), a deep-learning algorithm that conducts multi-level image analysis through pattern recognition, and improves its own diagnostic ability by training with large datasets (5,6). With the ability to integrate pixel-level data, CNN is able to aid endoscopists in the rapid interpretation of seemingly ambiguous visual data. One such area of diagnostic dilemma is the evaluation of gastric subepithelial lesions (SELs). While endoscopic ultrasound (EUS) is the most accurate imaging modality, there are no definitive EUS features to differentiate gastrointestinal stromal tumors (GISTs) from the commonly encountered gastrointestinal leiomyomas (GILs) (7-9). Misdiagnosis of GISTs and GILs are thought to comprise the majority of incorrect EUS diagnoses (9). Given the malignant potential of GISTs, it is crucial to accurately diagnose these lesions and to differentiate them from GILs, which are benign. The current standard is to differentiate these two by obtaining tissue samples with fine-needle aspiration or biopsy (EUS-FNA/B). However, FNA/B is invasive and is reported to have a lower diagnostic rate for SELs smaller than 20 mm (9,10). In this issue of Endoscopy, Yang et al. report the result of their AI-powered EUS model for differentiation between GISTs and GILs (11). 
Using a CNN for image recognition, the AI model was trained, validated, and evaluated on a total of 10,439 EUS images from 752 patients with histologically confirmed GISTs and GILs, collected at four endoscopic centers between 2013 and 2020. The authors report a significantly higher diagnostic accuracy with the AI model than with the expert endosonographer (94.0% vs. 70.2%, P value <0.001). More importantly, in a prospective evaluation of 508 consecutive patients with SELs, of whom 132 underwent histologic confirmation, diagnostic accuracy remained significantly higher with AI-powered EUS than with the expert endosonographer (78.8% vs. 69.7%, P value =0.01). When only histologically confirmed GISTs or GILs were examined, AI-joint diagnosis also had significantly higher accuracy, specificity, and positive predictive value (PPV), at 92.2%, 95.1%, and 94.1%, respectively, than individual diagnosis alone, at 76.6% (P value =0.01), 65.9% (P value =0.002), and 69.6% (P value <0.01), respectively. The sensitivity and negative predictive value (NPV) of AI-joint diagnosis were similar to those of individual diagnosis. These are promising results for applying deep learning to real-time EUS to distinguish GISTs from GILs. Previous studies have reported an improved diagnostic …
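The accuracy, sensitivity, specificity, PPV, and NPV figures quoted above are all ratios over the same binary confusion matrix. A minimal sketch of how they relate, using hypothetical counts rather than the study's actual data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix ratios for a binary diagnosis
    (here: GIST = positive, GIL = negative)."""
    total = tp + fp + tn + fn
    return {
        "accuracy":    (tp + tn) / total,
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv":         tp / (tp + fp),  # precision
        "npv":         tn / (tn + fn),
    }

# Hypothetical counts for illustration only -- not Yang et al.'s data.
m = diagnostic_metrics(tp=45, fp=3, tn=39, fn=5)
for name, value in m.items():
    print(f"{name}: {value:.3f}")
```

Note the asymmetry this exposes: a classifier can raise specificity and PPV (fewer benign GILs mislabeled as GISTs) while leaving sensitivity and NPV unchanged, which is the pattern the AI-joint diagnosis showed.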