
Journal of imaging informatics in medicine: Latest Publications

A Self-Supervised Equivariant Refinement Classification Network for Diabetic Retinopathy Classification.
Pub Date : 2024-09-19 DOI: 10.1007/s10278-024-01270-z
Jiacheng Fan, Tiejun Yang, Heng Wang, Huiyao Zhang, Wenjie Zhang, Mingzhu Ji, Jianyu Miao

Diabetic retinopathy (DR) is a retinal disease caused by diabetes that, without intervention, can lead to blindness; its detection is therefore of great significance for preventing blindness in patients. Most existing DR detection methods are supervised and usually require a large number of accurate pixel-level annotations. To address this problem, we propose a self-supervised Equivariant Refinement Classification Network (ERCN) for DR classification. First, we use an unsupervised contrastive pre-training network to learn a more generalized representation. Second, the class activation map (CAM) is refined through self-supervised learning: a spatial masking method first suppresses low-confidence predictions, and feature similarity between pixels then encourages fine-grained activation, yielding more accurate localization of the lesion. We propose a hybrid equivariant regularization loss to alleviate the degradation caused by local minima in the CAM refinement process. To further improve classification accuracy, we propose an attention-based multi-instance learning (MIL) scheme that weights each element of the feature map as an instance, which is more effective than the traditional patch-based instance extraction method. We evaluate our method on the EyePACS and DAVIS datasets, achieving 87.4% test accuracy on EyePACS and 88.7% on DAVIS. This shows that the proposed method outperforms other state-of-the-art self-supervised DR detection methods.
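
The spatial masking step in the CAM refinement above can be sketched as follows; the threshold value and array contents are illustrative assumptions, not values from the paper.

```python
import numpy as np

def mask_low_confidence(cam, threshold=0.3):
    """Suppress low-confidence activations in a class activation map.

    `cam` is a 2-D array of activation scores in [0, 1]; values below
    `threshold` are zeroed so later refinement focuses on confident
    regions. The threshold is an illustrative assumption.
    """
    cam = np.asarray(cam, dtype=float)
    return np.where(cam >= threshold, cam, 0.0)

cam = np.array([[0.9, 0.2],
                [0.4, 0.1]])
refined = mask_low_confidence(cam, threshold=0.3)
```

In the full method, the masked map would then be refined further using pixel-wise feature similarity.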

Citations: 0
Children Are Not Small Adults: Addressing Limited Generalizability of an Adult Deep Learning CT Organ Segmentation Model to the Pediatric Population.
Pub Date : 2024-09-19 DOI: 10.1007/s10278-024-01273-w
Devina Chatterjee, Adway Kanhere, Florence X Doo, Jerry Zhao, Andrew Chan, Alexander Welsh, Pranav Kulkarni, Annie Trang, Vishwa S Parekh, Paul H Yi

Deep learning (DL) tools developed on adult datasets may not generalize well to pediatric patients, posing potential safety risks. We evaluated the performance of TotalSegmentator, a state-of-the-art adult-trained CT organ segmentation model, on a subset of organs in a pediatric CT dataset and explored optimization strategies to improve pediatric segmentation performance. TotalSegmentator was retrospectively evaluated on abdominal CT scans from an external adult dataset (n = 300) and an external pediatric dataset (n = 359). Generalizability was quantified by comparing Dice scores between the adult and pediatric external datasets using Mann-Whitney U tests. Two DL optimization approaches were then evaluated: (1) a 3D nnU-Net model trained on only pediatric data, and (2) an adult nnU-Net model fine-tuned on the pediatric cases. Our results show TotalSegmentator had significantly lower overall mean Dice scores on pediatric vs. adult CT scans (0.73 vs. 0.81, P < .001), demonstrating limited generalizability to pediatric CT scans. Stratified by organ, the mean pediatric Dice score was lower for four organs (P < .001 for all): the right and left adrenal glands (right adrenal, 0.41 [0.39-0.43] vs. 0.69 [0.66-0.71]; left adrenal, 0.35 [0.32-0.37] vs. 0.68 [0.65-0.71]); the duodenum (0.47 [0.45-0.49] vs. 0.67 [0.64-0.69]); and the pancreas (0.73 [0.72-0.74] vs. 0.79 [0.77-0.81]). Performance on pediatric CT scans improved both by developing a pediatric-specific model and by fine-tuning an adult-trained model on pediatric images; both methods significantly improved segmentation accuracy over TotalSegmentator for all organs, especially smaller anatomical structures (e.g., > 0.2 higher mean Dice for the adrenal glands; P < .001).
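
The group comparison described above uses a Mann-Whitney U test on per-scan Dice scores; a minimal sketch with synthetic scores (the values below are illustrative, not the study's data):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical per-scan Dice scores for adult and pediatric test sets,
# drawn around the reported group means for illustration only.
rng = np.random.default_rng(0)
adult_dice = rng.normal(0.81, 0.05, 300)
pediatric_dice = rng.normal(0.73, 0.05, 359)

# Two-sided Mann-Whitney U test quantifying the generalizability gap.
stat, p = mannwhitneyu(adult_dice, pediatric_dice, alternative="two-sided")
```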

Citations: 0
Effect of Deep Learning Image Reconstruction on Image Quality and Pericoronary Fat Attenuation Index.
Pub Date : 2024-09-19 DOI: 10.1007/s10278-024-01234-3
Junqing Mei, Chang Chen, Ruoting Liu, Hongbing Ma

To compare the image quality and fat attenuation index (FAI) of coronary artery CT angiography (CCTA) under different tube voltages between deep learning image reconstruction (DLIR) and adaptive statistical iterative reconstruction V (ASIR-V). Three hundred one patients who underwent CCTA with automatic tube current modulation were prospectively enrolled and divided into two groups: a 120 kV group and a low tube voltage group. Images were reconstructed using ASIR-V at level 50% (ASIR-V50%) and high-strength DLIR (DLIR-H). In the low tube voltage group, the voltage was selected according to the Chinese BMI classification: 70 kV (BMI < 24 kg/m2), 80 kV (24 kg/m2 ≤ BMI < 28 kg/m2), 100 kV (BMI ≥ 28 kg/m2). At the same tube voltage, the subjective and objective image quality, edge rise distance (ERD), and FAI of the different algorithms were compared; across tube voltages, DLIR-H images were compared for subjective and objective image quality and ERD. Compared with the 120 kV group, DLIR-H image noise in the 70 kV, 80 kV, and 100 kV groups increased by 36%, 25%, and 12%, respectively (all P < 0.001), while contrast-to-noise ratio (CNR), subjective score, and ERD were similar (all P > 0.05). In the 70 kV, 80 kV, 100 kV, and 120 kV groups, compared with ASIR-V50%, DLIR-H image noise decreased by 50%, 53%, 47%, and 38-50%, respectively; CNR, subjective score, and FAI value increased significantly (all P < 0.001); and ERD decreased. Compared with 120 kV tube voltage, the combination of DLIR-H and low tube voltage maintains image quality. At the same tube voltage, compared with ASIR-V, DLIR-H improves image quality and FAI value.
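
One common definition of the contrast-to-noise ratio (CNR) compared above is the attenuation difference between a region of interest and background, divided by image noise; the study's exact ROI definitions are not given here, so this is a generic sketch with illustrative HU values.

```python
def contrast_to_noise_ratio(roi_mean, background_mean, noise_sd):
    """Generic CNR: |ROI mean - background mean| / noise SD, where the
    noise SD is typically measured in a uniform region of the image."""
    return abs(roi_mean - background_mean) / noise_sd

# Illustrative values only (e.g. enhanced vessel vs. background tissue).
cnr = contrast_to_noise_ratio(roi_mean=400.0, background_mean=50.0, noise_sd=25.0)
```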

Citations: 0
PelviNet: A Collaborative Multi-agent Convolutional Network for Enhanced Pelvic Image Registration.
Pub Date : 2024-09-09 DOI: 10.1007/s10278-024-01249-w
Rguibi Zakaria, Hajami Abdelmajid, Zitouni Dya, Allali Hakim

PelviNet introduces a groundbreaking multi-agent convolutional network architecture tailored for enhancing pelvic image registration. This innovative framework leverages shared convolutional layers, enabling synchronized learning among agents and ensuring an exhaustive analysis of intricate 3D pelvic structures. The architecture combines max pooling, parametric ReLU activations, and agent-specific layers to optimize both individual and collective decision-making processes. A communication mechanism efficiently aggregates outputs from these shared layers, enabling agents to make well-informed decisions by harnessing combined intelligence. PelviNet's evaluation centers on both quantitative accuracy metrics and visual representations to elucidate agents' performance in pinpointing optimal landmarks. Empirical results demonstrate PelviNet's superiority over traditional methods, achieving an average image-wise error of 2.8 mm, a subject-wise error of 3.2 mm, and a mean Euclidean distance error of 3.0 mm. These quantitative results highlight the model's efficiency and precision in landmark identification, crucial for medical contexts such as radiation therapy, where exact landmark identification significantly influences treatment outcomes. By reliably identifying critical structures, PelviNet advances pelvic image analysis and offers potential enhancements for broader medical imaging applications, marking a significant step forward in computational healthcare.
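
The mean Euclidean distance error reported above is a standard landmark metric; a minimal sketch with illustrative 3-D coordinates (in mm):

```python
import numpy as np

def mean_euclidean_error(pred, true):
    """Mean Euclidean distance between predicted and reference 3-D
    landmark coordinates, assumed to be in the same units (e.g. mm)."""
    pred = np.asarray(pred, dtype=float)
    true = np.asarray(true, dtype=float)
    return float(np.linalg.norm(pred - true, axis=1).mean())

# Two hypothetical landmarks: one exact hit, one 5 mm off.
pred = [[0.0, 0.0, 0.0], [3.0, 4.0, 0.0]]
true = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
```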

Citations: 0
Detection of Diabetic Retinopathy Using Discrete Wavelet-Based Center-Symmetric Local Binary Pattern and Statistical Features.
Pub Date : 2024-09-05 DOI: 10.1007/s10278-024-01243-2
Imtiyaz Ahmad, Vibhav Prakash Singh, Manoj Madhava Gore

A computer-aided diagnosis (CAD) system assists ophthalmologists in early diabetic retinopathy (DR) detection by automating the analysis of retinal images, enabling timely intervention and treatment. This paper introduces a novel CAD system based on global and multi-resolution analysis of retinal images. As a first step, we enhance the quality of the retinal images by applying a sequence of preprocessing techniques: a median filter, contrast-limited adaptive histogram equalization (CLAHE), and an unsharp filter. These preprocessing steps effectively eliminate noise and enhance contrast in the retinal images. The images are then represented at multiple scales using the discrete wavelet transform (DWT), and center-symmetric local binary pattern (CSLBP) features are extracted from each scale. The CSLBP features extracted from the decomposed images capture the fine and coarse details of the retinal fundus images, while statistical features capture global characteristics, providing a comprehensive representation. The detection performance of these features is evaluated on a benchmark dataset using two machine learning models, SVM and k-NN; the proposed method performs considerably better than other existing methods. Furthermore, the results demonstrate that combining wavelet-based CSLBP features with statistical features yields notably better detection performance than using either feature set alone.
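
A center-symmetric LBP compares the four opposite neighbor pairs of a 3x3 window and packs the results into a 4-bit code; a minimal per-patch sketch (the clockwise neighbor ordering and zero threshold are assumptions, and the paper applies this per wavelet scale rather than to single patches):

```python
import numpy as np

def cslbp_code(patch, t=0.0):
    """CS-LBP code (0-15) for one 3x3 patch: bit i is set when the
    difference between neighbor i and its diametric opposite exceeds t."""
    p = np.asarray(patch, dtype=float)
    # Eight neighbors, clockwise from top-left; pair i is (n[i], n[i+4]).
    n = [p[0, 0], p[0, 1], p[0, 2], p[1, 2],
         p[2, 2], p[2, 1], p[2, 0], p[1, 0]]
    code = 0
    for i in range(4):
        if n[i] - n[i + 4] > t:
            code |= 1 << i
    return code

patch = [[9, 1, 1],
         [1, 5, 1],
         [1, 1, 1]]
```

A feature vector for an image would be the histogram of these codes over all 3x3 windows.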

Citations: 0
Feasibility of Three-Dimension Chemical Exchange Saturation Transfer MRI for Predicting Tumor and Node Staging in Rectal Adenocarcinoma: An Exploration of Optimal ROI Measurement.
Pub Date : 2024-09-05 DOI: 10.1007/s10278-024-01029-6
Xiao Wang, Wenguang Liu, Ismail Bilal Masokano, Weiyin Vivian Liu, Yigang Pei, Wenzheng Li

To investigate the feasibility of predicting rectal adenocarcinoma (RA) tumor (T) and node (N) staging from an optimal ROI measurement using amide proton transfer-weighted signal intensity (APTw-SI) and magnetization transfer (MT) derived from three-dimensional chemical exchange saturation transfer (3D-CEST). Fifty-eight RA patients with pathological TN staging underwent 3D-CEST and DWI. APTw-SI, MT, and ADC values were measured using three ROI approaches (ss-ROI, ts-ROI, and wt-ROI) to analyze TN staging (T staging, T1-2 vs. T3-4; N staging, N- vs. N+); the reproducibility of APTw-SI and MT was also evaluated. The AUC was used to assess staging performance and determine the optimal ROI strategy. MT and APTw-SI both yielded good to excellent reproducibility with the three ROIs. Significant differences in MT across TN stages were observed with the various ROIs (all P < 0.05), but not in APTw-SI or ADC (all P > 0.05). AUCs of MT from the ss-ROI were 0.860 (95% CI, 0.743-0.937) and 0.852 (95% CI, 0.735-0.932) for predicting T and N staging, similar to the ts-ROI (T staging, 0.856 [95% CI, 0.739-0.934]; N staging, 0.831 [95% CI, 0.710-0.917]) and the wt-ROI (T staging, 0.833 [95% CI, 0.712-0.918]; N staging, 0.848 [95% CI, 0.729-0.929]) (all P > 0.05). The MT value of 3D-CEST has excellent TN staging predictive performance in RA patients with all three ROI methods. The ss-ROI is easy to operate and could serve as the preferred ROI approach for clinical and research applications of 3D-CEST imaging.
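
The AUC used above to compare ROI strategies is equivalent to the rank-based probability that a randomly chosen positive case scores higher than a randomly chosen negative one; a minimal sketch with hypothetical MT values:

```python
import numpy as np

def auc_score(labels, scores):
    """Rank-based AUC: fraction of positive/negative pairs where the
    positive scores higher, with ties counted as 0.5."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical MT values for N+ (label 1) vs. N- (label 0) cases.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.7, 0.4, 0.3]
```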

Citations: 0
Optimized Spatial Transformer for Segmenting Pancreas Abnormalities.
Pub Date : 2024-09-04 DOI: 10.1007/s10278-024-01224-5
Banavathu Sridevi, B John Jaidhan

The precise delineation of the pancreas in clinical images poses a substantial obstacle in medical image analysis and surgical procedures; challenges arise from the complexity of clinical image analysis and from pancreas-related complications in clinical practice. To tackle these challenges, a novel approach called the Spatial Horned Lizard Attention Approach (SHLAM) has been developed. A preprocessing function first examines the training MRI data and eliminates noise. The current attributes are then assessed, and the elements essential for forecasting the affected region are identified; once the affected region has been identified, the images undergo segmentation. The study assigns 80% of the data to training and 20% to testing. Optimal parameters were assessed in terms of precision, accuracy, recall, F-measure, error rate, Dice, and Jaccard, and the performance improvement was demonstrated by validating the method against various existing models. The proposed SHLAM method achieved an accuracy of 99.6%, surpassing all alternative methods.
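
The Dice and Jaccard overlap metrics listed among the evaluation parameters can be sketched as follows, using tiny binary masks for illustration:

```python
import numpy as np

def dice_jaccard(pred, target):
    """Dice and Jaccard overlap between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum())
    jaccard = inter / np.logical_or(pred, target).sum()
    return float(dice), float(jaccard)

# Toy 2x2 masks: prediction covers two pixels, reference covers one.
pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
```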

{"title":"Optimized Spatial Transformer for Segmenting Pancreas Abnormalities.","authors":"Banavathu Sridevi, B John Jaidhan","doi":"10.1007/s10278-024-01224-5","DOIUrl":"https://doi.org/10.1007/s10278-024-01224-5","url":null,"abstract":"<p><p>The precise delineation of the pancreas from clinical images poses a substantial obstacle in the realm of medical image analysis and surgical procedures. Challenges arise from the complexities of clinical image analysis and complications in clinical practice related to the pancreas. To tackle these challenges, a novel approach called the Spatial Horned Lizard Attention Approach (SHLAM) has been developed. As a result, a preprocessing function has been developed to examine and eliminate noise barriers from the trained MRI data. Furthermore, an assessment of the current attributes is conducted, followed by the identification of essential elements for forecasting the impacted region. Once the affected region has been identified, the images undergo segmentation. Furthermore, it is crucial to emphasize that the present study assigns 80% of the data for training and 20% for testing purposes. The optimal parameters were assessed based on precision, accuracy, recall, F-measure, error rate, Dice, and Jaccard. The performance improvement has been demonstrated by validating the method on various existing models. 
The SHLAM method proposed demonstrated an accuracy rate of 99.6%, surpassing that of all alternative methods.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142127895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
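The Dice and Jaccard overlap metrics named in the abstract above can be sketched for binary segmentation masks as follows; the toy 4×4 masks are purely illustrative and not taken from the study.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for boolean masks."""
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0

def jaccard(pred: np.ndarray, truth: np.ndarray) -> float:
    """Jaccard index: |A∩B| / |A∪B| for boolean masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

# Toy masks for illustration only: prediction covers 4 pixels, truth covers 3.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)
truth = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]], dtype=bool)

print(dice(pred, truth))     # 2*3/(4+3) = 6/7 ≈ 0.857
print(jaccard(pred, truth))  # 3/4 = 0.75
```

Both metrics reward overlap, but Jaccard penalizes disagreement more heavily, which is why papers often report the pair together.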
Citations: 0
Correction: Certified Imaging Informatics Professionals (CIIP) Demonstrate Value to the Healthcare Industry and Focus on Quality Through the ABII 10-Year Requirements Practice Option.
Pub Date : 2024-09-04 DOI: 10.1007/s10278-024-01246-z
Ameena Elahi, Nikki Fennell, Liana Watson
Citations: 0
Feature-Based vs. Deep-Learning Fusion Methods for the In Vivo Detection of Radiation Dermatitis Using Optical Coherence Tomography, a Feasibility Study.
Pub Date : 2024-09-04 DOI: 10.1007/s10278-024-01241-4
Christos Photiou, Constantina Cloconi, Iosif Strouthos

Acute radiation dermatitis (ARD) is a common and distressing issue for cancer patients undergoing radiation therapy, leading to significant morbidity. Despite available treatments, ARD remains difficult to control, necessitating further research into prevention and management strategies. Moreover, the lack of biomarkers for early quantitative assessment of ARD impedes progress in this area. This study investigates the detection of ARD using intensity-based and novel features of Optical Coherence Tomography (OCT) images, combined with machine learning. Imaging sessions were conducted twice weekly on twenty-two patients at six neck locations throughout their radiation treatment, with ARD severity graded by an expert oncologist. We compared a traditional feature-based machine learning technique with a deep learning late-fusion approach to classify normal skin vs. ARD using a dataset of 1487 images. The analysis demonstrates that the deep learning approach outperformed traditional machine learning, achieving an accuracy of 88%. These findings offer a promising foundation for future research aimed at developing a quantitative assessment tool to enhance the management of ARD.
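The late-fusion idea described above — combining the outputs of separate models rather than their raw features — can be sketched by averaging class probabilities from two classifiers. The equal weighting and the toy probabilities below are assumptions for illustration, not the study's actual configuration.

```python
import numpy as np

def late_fusion(probs_a: np.ndarray, probs_b: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Weighted average of per-class probabilities from two models."""
    return w * probs_a + (1.0 - w) * probs_b

# Hypothetical softmax outputs for [normal skin, ARD] on three images.
deep_probs = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
feat_probs = np.array([[0.7, 0.3], [0.5, 0.5], [0.3, 0.7]])

fused = late_fusion(deep_probs, feat_probs)
labels = fused.argmax(axis=1)  # 0 = normal skin, 1 = ARD
print(labels)  # [0 1 1]
```

In a late-fusion design each branch is trained and run independently, so the fusion step reduces to combining probability vectors like this, whereas early fusion would concatenate the two feature sets before classification.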

Citations: 0
Improving the Annotation Process in Computational Pathology: A Pilot Study with Manual and Semi-automated Approaches on Consumer and Medical Grade Devices.
Pub Date : 2024-09-04 DOI: 10.1007/s10278-024-01248-x
Giorgio Cazzaniga, Fabio Del Carro, Albino Eccher, Jan Ulrich Becker, Giovanni Gambaro, Mattia Rossi, Federico Pieruzzi, Filippo Fraggetta, Fabio Pagni, Vincenzo L'Imperio

The development of reliable artificial intelligence (AI) algorithms in pathology often depends on ground truth provided by annotation of whole slide images (WSI), a time-consuming and operator-dependent process. A comparative analysis of different annotation approaches is performed to streamline this process. Two pathologists annotated renal tissue using a semi-automated tool (Segment Anything Model, SAM) and manual devices (touchpad vs. mouse). A comparison was conducted in terms of working time, reproducibility (overlap fraction), and precision (0 to 10 accuracy rated by two expert nephropathologists) among different methods and operators. The impact of different displays on mouse performance was also evaluated. Annotations focused on three tissue compartments: tubules (57 annotations), glomeruli (53 annotations), and arteries (58 annotations). The semi-automatic approach was the fastest and had the least inter-observer variability, averaging 13.6 ± 0.2 min with a difference (Δ) of 2%, followed by the mouse (29.9 ± 10.2 min, Δ = 24%) and the touchpad (47.5 ± 19.6 min, Δ = 45%). The highest reproducibility in tubules and glomeruli was achieved with SAM (overlap values of 1 and 0.99, compared to 0.97 for the mouse and 0.94 and 0.93 for the touchpad), though SAM had lower reproducibility in arteries (overlap value of 0.89 compared to 0.94 for both the mouse and touchpad). No precision differences were observed between operators (p = 0.59). Using non-medical monitors increased annotation times by 6.1%. The future employment of semi-automated and AI-assisted approaches can significantly speed up the annotation process, improving the ground truth for AI tool development.
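The inter-observer time difference Δ reported above can be computed, under one plausible definition (the abstract does not give the exact formula), as the absolute difference between the two operators' mean annotation times relative to their combined average. The per-session times below are hypothetical.

```python
import statistics

def inter_observer_delta(times_a, times_b):
    """Percent difference between two operators' mean annotation times,
    relative to their combined average (assumed definition)."""
    mean_a = statistics.mean(times_a)
    mean_b = statistics.mean(times_b)
    avg = (mean_a + mean_b) / 2
    return abs(mean_a - mean_b) / avg * 100

# Hypothetical per-session annotation times (minutes) for two operators.
op1 = [13.4, 13.7, 13.6]
op2 = [13.8, 13.9, 13.8]

print(round(inter_observer_delta(op1, op2), 1))  # → 1.9
```

A small Δ like this matches the SAM result in the abstract, where the two pathologists' mean times differed by only about 2%.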

Citations: 0