Mainak Biswas , Luca Saba , Mannudeep Kalra , Rajesh Singh , J. Fernandes e Fernandes , Vijay Viswanathan , John R. Laird , Laura E. Mantella , Amer M. Johri , Mostafa M. Fouda , Jasjit S. Suri
{"title":"MultiNet 2.0:基于注意力的轻量级深度学习网络,用于颈动脉超声扫描和心血管风险评估中的狭窄测量。","authors":"Mainak Biswas , Luca Saba , Mannudeep Kalra , Rajesh Singh , J. Fernandes e Fernandes , Vijay Viswanathan , John R. Laird , Laura E. Mantella , Amer M. Johri , Mostafa M. Fouda , Jasjit S. Suri","doi":"10.1016/j.compmedimag.2024.102437","DOIUrl":null,"url":null,"abstract":"<div><h3>Background</h3><div>Cardiovascular diseases (CVD) cause 19 million fatalities each year and cost nations billions of dollars. Surrogate biomarkers are established methods for CVD risk stratification; however, manual inspection is costly, cumbersome, and error-prone. The contemporary artificial intelligence (AI) tools for segmentation and risk prediction, including older deep learning (DL) networks employ simple merge connections which may result in semantic loss of information and hence low in accuracy.</div></div><div><h3>Methodology</h3><div>We hypothesize that DL networks enhanced with attention mechanisms can do better segmentation than older DL models. The attention mechanism can concentrate on relevant features aiding the model in better understanding and interpreting images. This study proposes MultiNet 2.0 (AtheroPoint, Roseville, CA, USA), two attention networks have been used to segment the lumen from common carotid artery (CCA) ultrasound images and predict CVD risks.</div></div><div><h3>Results</h3><div>The database consisted of 407 ultrasound CCA images of both the left and right sides taken from 204 patients. Two experts were hired to delineate borders on the 407 images, generating two ground truths (GT1 and GT2). The results were far better than contemporary models. The lumen dimension (LD) error for GT1 and GT2 were 0.13±0.08 and 0.16±0.07 mm, respectively, the best in market. The AUC for low, moderate and high-risk patients’ detection from stenosis data for GT1 were 0.88, 0.98, and 1.00 respectively. Similarly, for GT2, the AUC values for low, moderate, and high-risk patient detection were 0.93, 0.97, and 1.00, respectively.</div><div>The system can be fully adopted for clinical practice in AtheroEdge™ model by AtheroPoint, Roseville, CA, USA.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"117 ","pages":"Article 102437"},"PeriodicalIF":5.4000,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"MultiNet 2.0: A lightweight attention-based deep learning network for stenosis measurement in carotid ultrasound scans and cardiovascular risk assessment\",\"authors\":\"Mainak Biswas , Luca Saba , Mannudeep Kalra , Rajesh Singh , J. Fernandes e Fernandes , Vijay Viswanathan , John R. Laird , Laura E. Mantella , Amer M. Johri , Mostafa M. Fouda , Jasjit S. Suri\",\"doi\":\"10.1016/j.compmedimag.2024.102437\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Background</h3><div>Cardiovascular diseases (CVD) cause 19 million fatalities each year and cost nations billions of dollars. Surrogate biomarkers are established methods for CVD risk stratification; however, manual inspection is costly, cumbersome, and error-prone. 
The contemporary artificial intelligence (AI) tools for segmentation and risk prediction, including older deep learning (DL) networks employ simple merge connections which may result in semantic loss of information and hence low in accuracy.</div></div><div><h3>Methodology</h3><div>We hypothesize that DL networks enhanced with attention mechanisms can do better segmentation than older DL models. The attention mechanism can concentrate on relevant features aiding the model in better understanding and interpreting images. This study proposes MultiNet 2.0 (AtheroPoint, Roseville, CA, USA), two attention networks have been used to segment the lumen from common carotid artery (CCA) ultrasound images and predict CVD risks.</div></div><div><h3>Results</h3><div>The database consisted of 407 ultrasound CCA images of both the left and right sides taken from 204 patients. Two experts were hired to delineate borders on the 407 images, generating two ground truths (GT1 and GT2). The results were far better than contemporary models. The lumen dimension (LD) error for GT1 and GT2 were 0.13±0.08 and 0.16±0.07 mm, respectively, the best in market. The AUC for low, moderate and high-risk patients’ detection from stenosis data for GT1 were 0.88, 0.98, and 1.00 respectively. Similarly, for GT2, the AUC values for low, moderate, and high-risk patient detection were 0.93, 0.97, and 1.00, respectively.</div><div>The system can be fully adopted for clinical practice in AtheroEdge™ model by AtheroPoint, Roseville, CA, USA.</div></div>\",\"PeriodicalId\":50631,\"journal\":{\"name\":\"Computerized Medical Imaging and Graphics\",\"volume\":\"117 \",\"pages\":\"Article 102437\"},\"PeriodicalIF\":5.4000,\"publicationDate\":\"2024-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computerized Medical Imaging and Graphics\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0895611124001149\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computerized Medical Imaging and Graphics","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0895611124001149","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
MultiNet 2.0: A lightweight attention-based deep learning network for stenosis measurement in carotid ultrasound scans and cardiovascular risk assessment
Background
Cardiovascular diseases (CVD) cause 19 million fatalities each year and cost nations billions of dollars. Surrogate biomarkers are established methods for CVD risk stratification; however, manual inspection is costly, cumbersome, and error-prone. Contemporary artificial intelligence (AI) tools for segmentation and risk prediction, including older deep learning (DL) networks, employ simple merge connections, which can cause semantic loss of information and hence low accuracy.
Methodology
We hypothesize that DL networks enhanced with attention mechanisms can perform better segmentation than older DL models. The attention mechanism concentrates on relevant features, helping the model better understand and interpret images. This study proposes MultiNet 2.0 (AtheroPoint, Roseville, CA, USA), in which two attention networks are used to segment the lumen from common carotid artery (CCA) ultrasound images and predict CVD risk.
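To illustrate how an attention mechanism can re-weight features before they are merged into a segmentation decoder, the sketch below implements a generic attention gate of the kind used in attention-based encoder–decoder networks (e.g., Attention U-Net). This is not the authors' MultiNet 2.0 code; the class name, channel sizes, and layer layout are illustrative assumptions.

```python
# Minimal attention-gate sketch (assumed design, not MultiNet 2.0 itself).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    """Re-weights skip-connection features using a gating signal from the decoder."""
    def __init__(self, skip_channels: int, gate_channels: int, inter_channels: int):
        super().__init__()
        self.theta = nn.Conv2d(skip_channels, inter_channels, kernel_size=1)  # project skip features
        self.phi = nn.Conv2d(gate_channels, inter_channels, kernel_size=1)    # project gating signal
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)                # 1-channel attention map
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, skip: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # Resample the gating signal to the skip resolution if they differ.
        if gate.shape[-2:] != skip.shape[-2:]:
            gate = F.interpolate(gate, size=skip.shape[-2:], mode="bilinear", align_corners=False)
        # Additive attention: sigmoid over a combined projection gives weights in [0, 1].
        attn = self.sigmoid(self.psi(self.relu(self.theta(skip) + self.phi(gate))))
        return skip * attn  # suppress irrelevant regions, keep salient lumen features

# Usage sketch: attended = AttentionGate(64, 128, 32)(skip_features, decoder_features)
```

Gating the skip connection in this way, rather than concatenating it unchanged, is one common remedy for the semantic loss attributed above to simple merge connections.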
Results
The database consisted of 407 ultrasound CCA images of the left and right sides taken from 204 patients. Two experts were hired to delineate borders on the 407 images, generating two ground truths (GT1 and GT2). The results were far better than those of contemporary models. The lumen dimension (LD) errors for GT1 and GT2 were 0.13±0.08 mm and 0.16±0.07 mm, respectively, the best among comparable systems. The AUCs for detecting low-, moderate-, and high-risk patients from stenosis data were 0.88, 0.98, and 1.00, respectively, for GT1. Similarly, for GT2, the AUC values for low-, moderate-, and high-risk patient detection were 0.93, 0.97, and 1.00, respectively.
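The per-tier AUCs reported above are naturally computed one-vs-rest: each risk tier (low, moderate, high) is treated as the positive class against the other two. The sketch below shows one way to do this with scikit-learn; the variable names and the toy scores are illustrative assumptions, not the paper's data or code.

```python
# One-vs-rest AUC per risk tier (illustrative sketch, not the authors' pipeline).
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

# y_true: ground-truth risk tier per patient (0=low, 1=moderate, 2=high)
# y_score: model-derived risk scores, one column per tier (e.g., softmax outputs)
y_true = np.array([0, 2, 1, 0, 2, 1, 0, 2])
y_score = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.2, 0.7],
    [0.3, 0.5, 0.2],
    [0.6, 0.3, 0.1],
    [0.2, 0.2, 0.6],
    [0.2, 0.6, 0.2],
    [0.8, 0.1, 0.1],
    [0.1, 0.3, 0.6],
])

y_bin = label_binarize(y_true, classes=[0, 1, 2])  # one-vs-rest binary labels
for tier, name in enumerate(["low", "moderate", "high"]):
    auc = roc_auc_score(y_bin[:, tier], y_score[:, tier])
    print(f"AUC ({name}-risk vs rest): {auc:.2f}")
```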
The system can be fully adopted for clinical practice within the AtheroEdge™ model by AtheroPoint, Roseville, CA, USA.
Journal Introduction
The purpose of the journal Computerized Medical Imaging and Graphics is to act as a source for the exchange of research results concerning algorithmic advances, development, and application of digital imaging in disease detection, diagnosis, intervention, prevention, precision medicine, and population health. Included in the journal will be articles on novel computerized imaging or visualization techniques, including artificial intelligence and machine learning, augmented reality for surgical planning and guidance, big biomedical data visualization, computer-aided diagnosis, computerized-robotic surgery, image-guided therapy, imaging scanning and reconstruction, mobile and tele-imaging, radiomics, and imaging integration and modeling with other information relevant to digital health. The types of biomedical imaging include: magnetic resonance, computed tomography, ultrasound, nuclear medicine, X-ray, microwave, optical and multi-photon microscopy, video and sensory imaging, and the convergence of biomedical images with other non-imaging datasets.