J. Naser, E. Lee, S. Pislaru, Gal Tsaban, Jeffrey G Malins, John I Jackson, D. Anisuzzaman, Behrouz Rostami, Francisco Lopez-Jimenez, Paul A. Friedman, Garvan C. Kane, Patricia A. Pellikka, Z. Attia
European Heart Journal - Digital Health, published 2024-02-26. DOI: 10.1093/ehjdh/ztae015
Artificial intelligence-based classification of echocardiographic views
Augmenting echocardiography with artificial intelligence would allow for automated assessment of routine parameters and identification of disease patterns not easily recognized otherwise. View classification is an essential first step before deep learning can be applied to the echocardiogram.
We trained 2- and 3-dimensional convolutional neural networks (CNNs) to classify nine view categories using transthoracic echocardiographic (TTE) studies obtained from 909 patients (10,269 videos). TTE studies from 229 patients were used for internal validation (2,582 videos). The CNNs were tested on 100 patients with comprehensive TTE studies, in which the two videos the CNN ranked as most likely to represent each view were evaluated, and on 408 patients with five view categories obtained via point-of-care ultrasound (POCUS).
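The abstract does not specify the network architecture, but the core computation of a 2-dimensional view classifier can be sketched as a toy forward pass: convolution, ReLU, global average pooling, and a softmax over the nine view categories. All shapes, weights, and names below are illustrative assumptions, not the study's model.

```python
import numpy as np

rng = np.random.default_rng(0)
N_VIEWS = 9  # the nine TTE view categories in the study

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def classify_frame(frame, kernels, w_out, b_out):
    """conv -> ReLU -> global average pool -> linear -> softmax."""
    feats = np.array([conv2d(frame, k).clip(min=0).mean() for k in kernels])
    logits = feats @ w_out + b_out
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()                  # probability per view category

# Untrained random weights, purely for shape illustration
frame   = rng.standard_normal((32, 32))   # one grayscale echo frame
kernels = rng.standard_normal((4, 3, 3))  # 4 hypothetical 3x3 filters
w_out   = rng.standard_normal((4, N_VIEWS))
b_out   = np.zeros(N_VIEWS)

probs = classify_frame(frame, kernels, w_out, b_out)
view  = int(np.argmax(probs))  # predicted view index
```

A 3-dimensional CNN would extend the same pattern with a time axis, convolving over several consecutive frames of the video rather than a single frame.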
The overall accuracy of the 2-dimensional CNN was 96.8% and the average area under the curve (AUC) was 0.997 on the comprehensive TTE testing set; these numbers were 98.4% and 0.998, respectively, on the POCUS set. For the 3-dimensional CNN, the accuracy and AUC were 96.3% and 0.998 for full TTE studies and 95.0% and 0.996 on POCUS videos, respectively. The positive predictive value, the proportion of predicted views that were correctly identified, was higher with the 2- than the 3-dimensional network, exceeding 93% in the apical, short-axis aortic valve, and parasternal long-axis left ventricle views.
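The two per-class metrics above can be sketched directly: positive predictive value as the fraction of frames predicted as a class whose true label matches, and one-vs-rest AUC via the rank (Mann-Whitney) statistic. The toy labels and scores below are illustrative, not data from the study.

```python
import numpy as np

def ppv(y_true, y_pred, cls):
    """Positive predictive value: of frames predicted as `cls`,
    the fraction whose true label is `cls`."""
    pred = y_pred == cls
    return np.count_nonzero(y_true[pred] == cls) / np.count_nonzero(pred)

def auc_ovr(y_true, scores, cls):
    """One-vs-rest AUC for one class: the probability that a random
    positive example outscores a random negative one (ties count half)."""
    s = scores[:, cls]
    pos = s[y_true == cls]
    neg = s[y_true != cls]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

# Toy example: 4 frames, 3 view classes
y_true = np.array([0, 0, 1, 2])
y_pred = np.array([0, 1, 1, 2])
scores = np.array([[0.80, 0.10, 0.10],
                   [0.50, 0.20, 0.30],
                   [0.05, 0.90, 0.05],
                   [0.30, 0.30, 0.40]])

print(ppv(y_true, y_pred, 1))      # 0.5: one of two frames predicted as class 1 is correct
print(auc_ovr(y_true, scores, 1))  # 1.0: the class-1 frame outscores all others on class 1
```

The study's "averaged AUC" would then correspond to the mean of `auc_ovr` over all view classes.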
An automated CNN-based view classifier identified cardiac views obtained using TTE and POCUS with high accuracy. The view classifier will facilitate the application of deep learning to echocardiography.