Diabetic retinopathy (DR) is a retinal disease caused by diabetes; without intervention it can lead to blindness, so detecting DR is of great significance for preventing blindness in patients. Most existing DR detection methods are supervised and usually require a large number of accurate pixel-level annotations. To address this problem, we propose a self-supervised Equivariant Refinement Classification Network (ERCN) for DR classification. First, we use an unsupervised contrastive pre-training network to learn a more generalized representation. Second, the class activation map (CAM) is refined by self-supervised learning: a spatial masking method first suppresses low-confidence predictions, and the feature similarity between pixels then encourages fine-grained activation to localize lesions more accurately. We propose a hybrid equivariant regularization loss to alleviate the degradation caused by local minima during CAM refinement. To further improve classification accuracy, we propose attention-based multi-instance learning (MIL), which weights each element of the feature map as an instance and is more effective than the traditional patch-based instance extraction method. We evaluate our method on the EyePACS and DAVIS datasets, achieving 87.4% test accuracy on EyePACS and 88.7% on DAVIS. This shows that the proposed method achieves better DR detection performance than other state-of-the-art self-supervised DR detection methods.
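The attention-based MIL pooling described in this abstract, where each spatial element of the feature map is an instance, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual ERCN: the projection parameters `V` and `w` are hypothetical stand-ins for learned weights.

```python
import numpy as np

def attention_mil_pool(feature_map, V, w):
    """Treat each spatial element of a C x H x W feature map as an instance
    and aggregate instances with softmax attention weights over space."""
    c, h, w_ = feature_map.shape
    instances = feature_map.reshape(c, h * w_).T          # (N, C) instances
    scores = np.tanh(instances @ V) @ w                   # (N,) raw attention
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                                  # softmax weights
    return alpha @ instances, alpha                       # (C,) bag feature

rng = np.random.default_rng(0)
fmap = rng.standard_normal((16, 8, 8))     # toy feature map
V = rng.standard_normal((16, 32)) * 0.1    # hypothetical projection weights
w = rng.standard_normal(32) * 0.1
bag, alpha = attention_mil_pool(fmap, V, w)
print(bag.shape, np.isclose(alpha.sum(), 1.0))  # (16,) True
```

The attention weights `alpha` double as a spatial saliency map, which is what makes this pooling compatible with CAM-style lesion localization.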
A Self-Supervised Equivariant Refinement Classification Network for Diabetic Retinopathy Classification.
Jiacheng Fan, Tiejun Yang, Heng Wang, Huiyao Zhang, Wenjie Zhang, Mingzhu Ji, Jianyu Miao
DOI: 10.1007/s10278-024-01270-z
Pub Date: 2024-09-19  DOI: 10.1007/s10278-024-01273-w
Devina Chatterjee, Adway Kanhere, Florence X Doo, Jerry Zhao, Andrew Chan, Alexander Welsh, Pranav Kulkarni, Annie Trang, Vishwa S Parekh, Paul H Yi
Deep learning (DL) tools developed on adult datasets may not generalize well to pediatric patients, posing potential safety risks. We evaluated the performance of TotalSegmentator, a state-of-the-art adult-trained CT organ segmentation model, on a subset of organs in a pediatric CT dataset and explored optimization strategies to improve pediatric segmentation performance. TotalSegmentator was retrospectively evaluated on abdominal CT scans from an external adult dataset (n = 300) and an external pediatric dataset (n = 359). Generalizability was quantified by comparing Dice scores between the adult and pediatric external datasets using Mann-Whitney U tests. Two DL optimization approaches were then evaluated: (1) a 3D nnU-Net model trained only on pediatric data, and (2) an adult nnU-Net model fine-tuned on the pediatric cases. Our results show TotalSegmentator had significantly lower overall mean Dice scores on pediatric vs. adult CT scans (0.73 vs. 0.81, P < .001), demonstrating limited generalizability to pediatric CT scans. Stratified by organ, the mean pediatric Dice score was lower for four organs (all P < .001): the right and left adrenal glands (right adrenal, 0.41 [0.39-0.43] vs. 0.69 [0.66-0.71]; left adrenal, 0.35 [0.32-0.37] vs. 0.68 [0.65-0.71]), duodenum (0.47 [0.45-0.49] vs. 0.67 [0.64-0.69]), and pancreas (0.73 [0.72-0.74] vs. 0.79 [0.77-0.81]). Performance on pediatric CT scans improved with both optimization approaches: the pediatric-specific model and the fine-tuned adult model each significantly improved segmentation accuracy over TotalSegmentator for all organs, especially smaller anatomical structures (e.g., > 0.2 higher mean Dice for adrenal glands; P < .001).
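The Dice scores compared above measure overlap between a predicted and a reference organ mask. A minimal sketch of the metric, on toy masks rather than the study's data:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(pred, truth).sum() / denom

a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1   # 4 voxels
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1   # 6 voxels, 4 overlapping
print(round(dice_score(a, b), 3))  # 0.8
```

Per-scan Dice values like these, pooled over the adult and pediatric cohorts, are what the Mann-Whitney U tests in the study compare.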
Children Are Not Small Adults: Addressing Limited Generalizability of an Adult Deep Learning CT Organ Segmentation Model to the Pediatric Population.
Pub Date: 2024-09-19  DOI: 10.1007/s10278-024-01234-3
Junqing Mei, Chang Chen, Ruoting Liu, Hongbing Ma
To compare the image quality and fat attenuation index (FAI) of coronary artery CT angiography (CCTA) under different tube voltages between deep learning image reconstruction (DLIR) and adaptive statistical iterative reconstruction V (ASIR-V). Three hundred one patients who underwent CCTA with automatic tube current modulation were prospectively enrolled and divided into two groups: a 120 kV group and a low tube voltage group. Images were reconstructed using ASIR-V level 50% (ASIR-V50%) and high-strength DLIR (DLIR-H). In the low tube voltage group, the voltage was selected according to the Chinese BMI classification: 70 kV (BMI < 24 kg/m2), 80 kV (24 kg/m2 ≤ BMI < 28 kg/m2), and 100 kV (BMI ≥ 28 kg/m2). At the same tube voltage, subjective and objective image quality, edge rise distance (ERD), and FAI were compared between algorithms; for DLIR-H, the same measures were also compared across tube voltages. Compared with the 120 kV group, DLIR-H image noise in the 70 kV, 80 kV, and 100 kV groups increased by 36%, 25%, and 12%, respectively (all P < 0.001); contrast-to-noise ratio (CNR), subjective score, and ERD were similar (all P > 0.05). In the 70 kV, 80 kV, 100 kV, and 120 kV groups, compared with ASIR-V50%, DLIR-H image noise decreased by 50%, 53%, 47%, and 38-50%, respectively; CNR, subjective score, and FAI value increased significantly (all P < 0.001); and ERD decreased. Compared with 120 kV tube voltage, the combination of DLIR-H and low tube voltage maintains image quality. At the same tube voltage, compared with ASIR-V, DLIR-H improves image quality and FAI value.
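The noise and CNR comparisons above rest on standard ROI statistics: noise is the standard deviation within a homogeneous ROI, and CNR is the attenuation difference between two ROIs divided by that noise. A minimal sketch, with hypothetical HU samples in place of the study's measurements:

```python
import numpy as np

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio: difference of ROI mean attenuations
    divided by the noise (sample SD of the background ROI)."""
    noise = roi_background.std(ddof=1)
    return abs(roi_signal.mean() - roi_background.mean()) / noise

rng = np.random.default_rng(1)
aorta = 400 + 20 * rng.standard_normal(500)   # hypothetical enhanced-lumen HU
fat = -80 + 20 * rng.standard_normal(500)     # hypothetical pericoronary fat HU
print(cnr(aorta, fat) > 10)  # True: large contrast relative to noise
```

This is why DLIR-H's roughly 50% noise reduction at fixed contrast translates directly into higher CNR at every tube voltage.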
Effect of Deep Learning Image Reconstruction on Image Quality and Pericoronary Fat Attenuation Index.
Pub Date: 2024-09-09  DOI: 10.1007/s10278-024-01249-w
Rguibi Zakaria, Hajami Abdelmajid, Zitouni Dya, Allali Hakim
PelviNet introduces a groundbreaking multi-agent convolutional network architecture tailored for enhancing pelvic image registration. This innovative framework leverages shared convolutional layers, enabling synchronized learning among agents and ensuring an exhaustive analysis of intricate 3D pelvic structures. The architecture combines max pooling, parametric ReLU activations, and agent-specific layers to optimize both individual and collective decision-making processes. A communication mechanism efficiently aggregates outputs from these shared layers, enabling agents to make well-informed decisions by harnessing combined intelligence. PelviNet's evaluation centers on both quantitative accuracy metrics and visual representations to elucidate agents' performance in pinpointing optimal landmarks. Empirical results demonstrate PelviNet's superiority over traditional methods, achieving an average image-wise error of 2.8 mm, a subject-wise error of 3.2 mm, and a mean Euclidean distance error of 3.0 mm. These quantitative results highlight the model's efficiency and precision in landmark identification, crucial for medical contexts such as radiation therapy, where exact landmark identification significantly influences treatment outcomes. By reliably identifying critical structures, PelviNet advances pelvic image analysis and offers potential enhancements for broader medical imaging applications, marking a significant step forward in computational healthcare.
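The image-wise and mean Euclidean distance errors reported above are average distances between predicted and reference landmark coordinates. A sketch of the metric with made-up coordinates, assuming isotropic 1 mm voxel spacing (the study's actual spacing is not stated here):

```python
import numpy as np

def mean_euclidean_error(pred, truth, spacing=(1.0, 1.0, 1.0)):
    """Mean 3D Euclidean distance (in mm) between predicted and reference
    landmark coordinates, given per-axis voxel spacing in mm."""
    diff = (np.asarray(pred, float) - np.asarray(truth, float)) * np.asarray(spacing)
    return np.linalg.norm(diff, axis=1).mean()

pred = [[10, 20, 30], [42, 15, 8]]    # hypothetical predicted landmarks
truth = [[10, 23, 30], [42, 15, 12]]  # hypothetical reference landmarks
print(mean_euclidean_error(pred, truth))  # (3 + 4) / 2 = 3.5
```

Averaging this quantity per image versus per subject gives the image-wise and subject-wise errors quoted in the abstract.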
PelviNet: A Collaborative Multi-agent Convolutional Network for Enhanced Pelvic Image Registration.
Pub Date: 2024-09-05  DOI: 10.1007/s10278-024-01243-2
Imtiyaz Ahmad, Vibhav Prakash Singh, Manoj Madhava Gore
A computer-aided diagnosis (CAD) system assists ophthalmologists in early diabetic retinopathy (DR) detection by automating the analysis of retinal images, enabling timely intervention and treatment. This paper introduces a novel CAD system based on global and multi-resolution analysis of retinal images. As a first step, we enhance the quality of the retinal images by applying a sequence of preprocessing techniques: a median filter, contrast limited adaptive histogram equalization (CLAHE), and an unsharp filter. These preprocessing steps effectively eliminate noise and enhance contrast in the retinal images. The images are then represented at multiple scales using the discrete wavelet transform (DWT), and center-symmetric local binary pattern (CSLBP) features are extracted from each scale. The CSLBP features extracted from the decomposed images capture the fine and coarse details of the retinal fundus images, while statistical features capture their global characteristics, together providing a comprehensive representation. The detection performance of these features is evaluated on a benchmark dataset using two machine learning models, SVM and k-NN, and the proposed approach proves considerably more effective than existing methods. Furthermore, the results demonstrate that combining wavelet-based CSLBP features with statistical features yields notably better detection performance than using either feature set individually.
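The center-symmetric LBP descriptor named above compares the four diametrically opposite neighbor pairs of each pixel, giving a 4-bit code (16 patterns) per pixel. A minimal single-scale sketch; the paper applies this per DWT sub-band, which is omitted here, and the threshold value is an illustrative assumption:

```python
import numpy as np

def cslbp(image, threshold=0.01):
    """Center-symmetric LBP: for each interior pixel, compare the 4
    diametrically opposite pairs of its 8-neighborhood; each comparison
    contributes one bit, so codes range over 16 patterns."""
    img = image.astype(float)
    # the 4 opposite-pair offsets (dy, dx) in an 8-neighborhood
    pairs = [((-1, -1), (1, 1)), ((-1, 0), (1, 0)),
             ((-1, 1), (1, -1)), ((0, 1), (0, -1))]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=int)
    for bit, ((dy1, dx1), (dy2, dx2)) in enumerate(pairs):
        n1 = img[1 + dy1:h - 1 + dy1, 1 + dx1:w - 1 + dx1]
        n2 = img[1 + dy2:h - 1 + dy2, 1 + dx2:w - 1 + dx2]
        codes |= ((n1 - n2) > threshold).astype(int) << bit
    hist = np.bincount(codes.ravel(), minlength=16)
    return hist / hist.sum()   # normalized 16-bin descriptor

img = np.arange(25).reshape(5, 5).astype(float)  # toy gradient image
print(cslbp(img).shape)  # (16,)
```

Because only opposite pairs are compared, the histogram has 16 bins instead of the 256 of classic LBP, which keeps the per-scale feature vector compact.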
Detection of Diabetic Retinopathy Using Discrete Wavelet-Based Center-Symmetric Local Binary Pattern and Statistical Features.
To investigate the feasibility of predicting rectal adenocarcinoma (RA) tumor (T) and node (N) staging from an optimal ROI measurement using amide proton transfer weighted-signal intensity (APTw-SI) and magnetization transfer (MT) derived from three-dimensional chemical exchange saturation transfer (3D-CEST). Fifty-eight RA patients with pathological TN staging underwent 3D-CEST and DWI. APTw-SI, MT, and ADC values were measured using three ROI approaches (ss-ROI, ts-ROI, and wt-ROI) to analyze TN staging (T staging, T1-2 vs. T3-4; N staging, N - vs. N +); the reproducibility of APTw-SI and MT was also evaluated. The AUC was used to assess staging performance and determine the optimal ROI strategy. MT and APTw-SI yielded good to excellent reproducibility with all three ROIs. Significant differences in MT were observed across ROIs (all P < 0.05) but not in APTw-SI or ADC (all P > 0.05) for the TN stage. AUCs of MT from ss-ROI were 0.860 (95% CI, 0.743-0.937) and 0.852 (95% CI, 0.735-0.932) for predicting T and N staging, similar to ts-ROI (T staging, 0.856 [95% CI, 0.739-0.934]; N staging, 0.831 [95% CI, 0.710-0.917]) and wt-ROI (T staging, 0.833 [95% CI, 0.712-0.918]; N staging, 0.848 [95% CI, 0.729-0.929]) (all P > 0.05). The MT value of 3D-CEST has excellent TN staging predictive performance in RA patients with all three ROI methods. The ss-ROI is easy to operate and could serve as the preferred ROI approach for clinical and research applications of 3D-CEST imaging.
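The AUCs reported above are equivalent to the Mann-Whitney statistic: the probability that a randomly chosen higher-stage case has a larger MT value than a randomly chosen lower-stage case. A sketch of that rank-based identity, with invented MT values rather than the study's measurements:

```python
import numpy as np

def auc_rank(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U identity: P(score_pos > score_neg),
    with ties counted as 1/2."""
    pos = np.asarray(scores_pos, float)
    neg = np.asarray(scores_neg, float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# hypothetical MT values, assuming higher MT in T3-4 than T1-2
t34 = [3.1, 2.8, 2.6, 2.9]
t12 = [2.4, 2.7, 2.8]
print(auc_rank(t34, t12))  # (9 + 0.5) / 12 ≈ 0.792
```

Comparing such AUCs across the ss-, ts-, and wt-ROI measurements is how the optimal ROI strategy is chosen.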
Feasibility of Three-Dimension Chemical Exchange Saturation Transfer MRI for Predicting Tumor and Node Staging in Rectal Adenocarcinoma: An Exploration of Optimal ROI Measurement.
Xiao Wang, Wenguang Liu, Ismail Bilal Masokano, Weiyin Vivian Liu, Yigang Pei, Wenzheng Li
DOI: 10.1007/s10278-024-01029-6
Pub Date: 2024-09-04  DOI: 10.1007/s10278-024-01224-5
Banavathu Sridevi, B John Jaidhan
The precise delineation of the pancreas from clinical images poses a substantial obstacle in medical image analysis and surgical procedures. Challenges arise from the complexities of clinical image analysis and from pancreas-related complications in clinical practice. To tackle these challenges, a novel approach called the Spatial Horned Lizard Attention Approach (SHLAM) has been developed. A preprocessing function examines and eliminates noise from the training MRI data. An assessment of the available attributes is then conducted, identifying the elements essential for forecasting the affected region; once the affected region has been identified, the images undergo segmentation. The present study assigns 80% of the data for training and 20% for testing. Performance was assessed using precision, accuracy, recall, F-measure, error rate, Dice, and Jaccard, and the improvement was demonstrated by validation against various existing models. The proposed SHLAM method achieved an accuracy of 99.6%, surpassing all alternative methods.
Optimized Spatial Transformer for Segmenting Pancreas Abnormalities.
Pub Date: 2024-09-04  DOI: 10.1007/s10278-024-01246-z
Ameena Elahi, Nikki Fennell, Liana Watson
Correction: Certified Imaging Informatics Professionals (CIIP) Demonstrate Value to the Healthcare Industry and Focus on Quality Through the ABII 10-Year Requirements Practice Option.
Ameena Elahi, Nikki Fennell, Liana Watson
Acute radiation dermatitis (ARD) is a common and distressing issue for cancer patients undergoing radiation therapy, leading to significant morbidity. Despite available treatments, ARD remains difficult to prevent and manage, necessitating further research; moreover, the lack of biomarkers for early quantitative assessment of ARD impedes progress in this area. This study investigates the detection of ARD using intensity-based and novel features of optical coherence tomography (OCT) images, combined with machine learning. Imaging sessions were conducted twice weekly on twenty-two patients at six neck locations throughout their radiation treatment, with ARD severity graded by an expert oncologist. We compared a traditional feature-based machine learning technique with a deep learning late-fusion approach to classify normal skin vs. ARD using a dataset of 1487 images. The deep learning approach outperformed traditional machine learning, achieving an accuracy of 88%. These findings offer a promising foundation for future research aimed at developing a quantitative assessment tool to enhance the management of ARD.
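One common form of the late fusion named above is to average the class-probability outputs of independently trained branches. The abstract does not specify the paper's exact fusion rule, so this is a generic sketch with invented probabilities and an assumed equal weighting:

```python
import numpy as np

def late_fusion(prob_a, prob_b, weight=0.5):
    """Late fusion: combine the class-probability outputs of two
    independently trained classifiers by weighted averaging."""
    return weight * np.asarray(prob_a) + (1 - weight) * np.asarray(prob_b)

# hypothetical per-class probabilities [normal, ARD] from two branches
p_feat = [0.30, 0.70]   # hand-crafted-feature branch
p_cnn = [0.10, 0.90]    # image (deep) branch
fused = late_fusion(p_feat, p_cnn)
print(fused.argmax())  # 1 -> classified as ARD
```

Because fusion happens at the decision level, each branch can use its own input representation (OCT intensity features vs. raw B-scans) without a shared architecture.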
Feature-Based vs. Deep-Learning Fusion Methods for the In Vivo Detection of Radiation Dermatitis Using Optical Coherence Tomography, a Feasibility Study. Christos Photiou, Constantina Cloconi, Iosif Strouthos. Journal of imaging informatics in medicine. Pub Date : 2024-09-04. DOI: 10.1007/s10278-024-01241-4
Pub Date : 2024-09-04. DOI: 10.1007/s10278-024-01248-x
Giorgio Cazzaniga, Fabio Del Carro, Albino Eccher, Jan Ulrich Becker, Giovanni Gambaro, Mattia Rossi, Federico Pieruzzi, Filippo Fraggetta, Fabio Pagni, Vincenzo L'Imperio
The development of reliable artificial intelligence (AI) algorithms in pathology often depends on ground truth provided by annotation of whole slide images (WSI), a time-consuming and operator-dependent process. A comparative analysis of different annotation approaches was performed to streamline this process. Two pathologists annotated renal tissue using a semi-automated tool (Segment Anything Model, SAM) and manual devices (touchpad vs. mouse). A comparison was conducted in terms of working time, reproducibility (overlap fraction), and precision (0 to 10 accuracy rated by two expert nephropathologists) among the different methods and operators. The impact of different displays on mouse performance was also evaluated. Annotations focused on three tissue compartments: tubules (57 annotations), glomeruli (53 annotations), and arteries (58 annotations). The semi-automatic approach was the fastest and had the least inter-observer variability, averaging 13.6 ± 0.2 min with a difference (Δ) of 2%, followed by the mouse (29.9 ± 10.2 min, Δ = 24%) and the touchpad (47.5 ± 19.6 min, Δ = 45%). The highest reproducibility in tubules and glomeruli was achieved with SAM (overlap values of 1 and 0.99, compared to 0.97 for the mouse and 0.94 and 0.93 for the touchpad), though SAM had lower reproducibility in arteries (overlap value of 0.89 compared to 0.94 for both the mouse and touchpad). No precision differences were observed between operators (p = 0.59). Using non-medical monitors increased annotation times by 6.1%. The future employment of semi-automated and AI-assisted approaches can significantly speed up the annotation process, improving the ground truth for AI tool development.
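The abstract above quantifies inter-observer reproducibility as an "overlap fraction" between two operators' annotation masks. The study does not specify the exact metric, so the sketch below assumes intersection-over-union (Jaccard index), a common choice for binary segmentation masks; the function name and the toy masks are illustrative:

```python
import numpy as np

def overlap_fraction(mask_a, mask_b):
    """Intersection-over-union of two binary annotation masks.
    Returns 1.0 for identical masks, 0.0 for disjoint ones."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: trivially in full agreement
    return np.logical_and(a, b).sum() / union

# Two annotators agree on 2 of the 3 pixels that either marked.
annot_1 = np.array([[1, 1, 0],
                    [1, 0, 0]])
annot_2 = np.array([[1, 1, 0],
                    [0, 0, 0]])
print(round(overlap_fraction(annot_1, annot_2), 3))  # 0.667
```

Under this reading, the reported overlap of 1 for SAM on tubules would mean the two operators' semi-automated masks were pixel-identical, while 0.89 on arteries reflects partial disagreement at lesion boundaries.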
Improving the Annotation Process in Computational Pathology: A Pilot Study with Manual and Semi-automated Approaches on Consumer and Medical Grade Devices. Giorgio Cazzaniga, Fabio Del Carro, Albino Eccher, Jan Ulrich Becker, Giovanni Gambaro, Mattia Rossi, Federico Pieruzzi, Filippo Fraggetta, Fabio Pagni, Vincenzo L'Imperio. Journal of imaging informatics in medicine. Pub Date : 2024-09-04. DOI: 10.1007/s10278-024-01248-x