Background: We hypothesize that a generative adversarial network (GAN) combined with self-attention (SA) and aggregated residual transformations (ResNeXt) outperforms conventional deep learning models in differentiating hepatocellular carcinoma (HCC). Attention modules help the network concentrate on salient features and suppress redundant ones, while aggregated residual transformations allow relevant features to be reused. We therefore propose a GAN+SA+ResNeXt deep learning model to improve HCC prediction accuracy.
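For illustration only, the sketch below (PyTorch; not taken from the paper, and the module names, cardinality, and bottleneck width are assumptions) shows one way a 3D self-attention module and a ResNeXt-style aggregated-residual block of the kind described above could be built: the attention map re-weights salient voxels, while the grouped convolution implements split-transform-merge feature reuse behind an identity shortcut.

```python
import torch
import torch.nn as nn

class SelfAttention3D(nn.Module):
    """SAGAN-style self-attention over 3D feature maps (hypothetical layout;
    the paper does not publish implementation details)."""
    def __init__(self, channels):
        super().__init__()
        hidden = max(channels // 8, 1)
        self.query = nn.Conv3d(channels, hidden, kernel_size=1)
        self.key = nn.Conv3d(channels, hidden, kernel_size=1)
        self.value = nn.Conv3d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, d, h, w = x.shape
        n = d * h * w
        q = self.query(x).view(b, -1, n).permute(0, 2, 1)    # (b, n, hidden)
        k = self.key(x).view(b, -1, n)                        # (b, hidden, n)
        attn = torch.softmax(torch.bmm(q, k), dim=-1)         # (b, n, n)
        v = self.value(x).view(b, c, n)                       # (b, c, n)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, d, h, w)
        return self.gamma * out + x                           # residual connection

class ResNeXtBlock3D(nn.Module):
    """Aggregated residual transformations (ResNeXt) via a grouped 3D convolution."""
    def __init__(self, channels, cardinality=8, bottleneck=64):
        super().__init__()
        self.transform = nn.Sequential(
            nn.Conv3d(channels, bottleneck, kernel_size=1, bias=False),
            nn.BatchNorm3d(bottleneck), nn.ReLU(inplace=True),
            nn.Conv3d(bottleneck, bottleneck, kernel_size=3, padding=1,
                      groups=cardinality, bias=False),        # split-transform-merge
            nn.BatchNorm3d(bottleneck), nn.ReLU(inplace=True),
            nn.Conv3d(bottleneck, channels, kernel_size=1, bias=False),
            nn.BatchNorm3d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.transform(x))  # identity shortcut reuses features
```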
Methods: 228 multiphase CT scans from 57 patients were retrospectively analyzed with local IRB approval; 30 patients had pathologically confirmed HCC and the remaining 27 were non-HCC. Pre-processing consisted of automatic liver segmentation and Hounsfield unit (HU) normalization, followed by deep learning training with five-fold cross-validation (training:testing ≈ 4:1) of a conventional 3D GAN, a 3D GAN+SA, and a 3D GAN+SA+ResNeXt, respectively. Area under the receiver operating characteristic curve (AUROC), accuracy, sensitivity, and specificity of HCC prediction were evaluated.
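As a minimal sketch of this evaluation pipeline (not the authors' code; the HU window, the 0.5 decision threshold, and the model_fn interface are assumptions), the following Python illustrates HU normalization and patient-level five-fold cross-validation reporting AUROC, accuracy, sensitivity, and specificity per fold.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score, confusion_matrix

def normalize_hu(volume, hu_min=-100.0, hu_max=300.0):
    """Clip a CT volume to an abdominal soft-tissue HU window and scale to [0, 1].
    The window bounds are assumptions; the abstract states only that HU
    normalization was performed."""
    volume = np.clip(volume, hu_min, hu_max)
    return (volume - hu_min) / (hu_max - hu_min)

def cross_validate(model_fn, volumes, labels, n_splits=5, seed=42):
    """Patient-level stratified five-fold CV (training:testing ~ 4:1).
    model_fn is a hypothetical factory returning an object with
    scikit-learn-style fit/predict_proba methods."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    metrics = []
    for train_idx, test_idx in skf.split(volumes, labels):
        model = model_fn()
        model.fit(volumes[train_idx], labels[train_idx])
        prob = model.predict_proba(volumes[test_idx])[:, 1]   # P(HCC)
        pred = (prob >= 0.5).astype(int)                       # assumed threshold
        tn, fp, fn, tp = confusion_matrix(labels[test_idx], pred).ravel()
        metrics.append({
            "auroc": roc_auc_score(labels[test_idx], prob),
            "accuracy": (tp + tn) / (tp + tn + fp + fn),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
        })
    return metrics
```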
Results: Compared with the conventional models, the proposed method achieved a larger AUROC (95%), better accuracy (91%) and sensitivity (93%), acceptable specificity (88%), and a prediction time of 0.04 s.
Conclusions: A deep GAN with attention and aggregated residual transformations for HCC diagnosis on multiphase CT is feasible and favorable, with improved accuracy and efficiency, and harbors clinical potential for differentiating HCC from other benign or malignant liver lesions.