
Latest publications from International Journal of Biomedical Imaging

Computer-Aided Brain Tumor Diagnosis: Performance Evaluation of Deep Learner CNN Using Augmented Brain MRI.
IF 7.6 Q1 Medicine Pub Date : 2021-06-13 eCollection Date: 2021-01-01 DOI: 10.1155/2021/5513500
Asma Naseer, Tahreem Yasir, Arifah Azhar, Tanzeela Shakeel, Kashif Zafar

A brain tumor is a deadly neurological disease caused by an abnormal and uncontrollable growth of cells inside the brain or skull. The mortality rate of patients suffering from this disease is gradually increasing. Analysing Magnetic Resonance Images (MRIs) manually is inadequate for efficient and accurate brain tumor diagnosis. An early diagnosis of the disease enables timely treatment, consequently elevating the survival rate of patients. Modern brain imaging methodologies have improved the detection rate of brain tumors. In the past few years, a great deal of research has been carried out on computer-aided diagnosis of human brain tumors with the goal of 100% diagnosis accuracy. The focus of this research is on early diagnosis of brain tumors via a Convolutional Neural Network (CNN) to improve on state-of-the-art diagnosis accuracy. The proposed CNN is trained on a benchmark dataset, BR35H, containing brain tumor MRIs. The performance and sustainability of the model are evaluated on six different datasets, i.e., BMI-I, BTI, BMI-II, BTS, BMI-III, and BD-BT. To improve the performance of the model and to make it robust to entirely unseen data, different geometric data augmentation techniques, along with statistical standardization, are employed. The proposed CNN-based CAD system for brain tumor diagnosis performs better than other systems, achieving an average accuracy of around 98.8% and a specificity of around 0.99. It also achieves 100% correct diagnosis on two brain MRI datasets, BTS and BD-BT. The performance of the proposed system is also compared with existing systems, and the analysis shows that the proposed system outperforms all of them.
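The abstract names two preprocessing ideas, geometric data augmentation and statistical standardization, without implementation details. A minimal NumPy sketch of what such a step could look like is below; the function names and the toy data are illustrative, not from the paper:

```python
import numpy as np

def augment_geometric(img):
    """Generate simple geometric variants of a 2D MRI slice:
    the original, horizontal/vertical flips, and a 90-degree rotation."""
    return [img, np.fliplr(img), np.flipud(img), np.rot90(img, k=1)]

def standardize(img):
    """Statistical standardization: zero mean, unit variance per image."""
    mu, sigma = img.mean(), img.std()
    return (img - mu) / (sigma + 1e-8)

# Toy 4x4 "slice" standing in for an MRI image
slice_ = np.arange(16, dtype=float).reshape(4, 4)
batch = [standardize(v) for v in augment_geometric(slice_)]
print(len(batch))                  # 4 variants
print(round(batch[0].mean(), 6))   # 0.0 after standardization
```

In practice the augmented variants would be fed to the CNN during training so that the network sees geometrically perturbed copies of each scan.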

Citations: 44
Corrigendum to "Robust Diffeomorphic Mapping via Geodesically Controlled Active Shapes".
IF 7.6 Q1 Medicine Pub Date : 2021-05-26 eCollection Date: 2021-01-01 DOI: 10.1155/2021/9780202
Daniel J Tward, Jun Ma, Michael I Miller, Laurent Younes

[This corrects the article DOI: 10.1155/2013/205494.].

Citations: 0
Transfer Learning to Detect COVID-19 Automatically from X-Ray Images Using Convolutional Neural Networks.
IF 7.6 Q1 Medicine Pub Date : 2021-05-15 eCollection Date: 2021-01-01 DOI: 10.1155/2021/8828404
Mundher Mohammed Taresh, Ningbo Zhu, Talal Ahmed Ali Ali, Asaad Shakir Hameed, Modhi Lafta Mutar

The novel coronavirus disease 2019 (COVID-19) is a contagious disease that has caused thousands of deaths and infected millions worldwide. Technologies that allow for fast, highly accurate detection of COVID-19 infections can therefore offer healthcare professionals much-needed help. This study aims to evaluate the effectiveness of state-of-the-art pretrained Convolutional Neural Networks (CNNs) for the automatic diagnosis of COVID-19 from chest X-rays (CXRs). The dataset used in the experiments consists of 1200 CXR images from individuals with COVID-19, 1345 CXR images from individuals with viral pneumonia, and 1341 CXR images from healthy individuals. The effectiveness of artificial intelligence (AI) in the rapid and precise identification of COVID-19 from CXR images is explored using different pretrained deep learning models, each fine-tuned to maximise detection accuracy, in order to identify the best-performing one. The results showed that deep learning with X-ray imaging is useful in capturing critical biological markers associated with COVID-19 infection. VGG16 and MobileNet obtained the highest accuracy of 98.28%. However, VGG16 outperformed all other models in COVID-19 detection, with an accuracy, F1 score, precision, specificity, and sensitivity of 98.72%, 97.59%, 96.43%, 98.70%, and 98.78%, respectively. The outstanding performance of these pretrained models can significantly improve the speed and accuracy of COVID-19 diagnosis. However, a larger dataset of COVID-19 X-ray images is required for more accurate and reliable identification of COVID-19 infections when using deep transfer learning. This would be extremely beneficial in a pandemic in which the disease burden and the need for preventive measures conflict with the currently available resources.
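The abstract reports accuracy, F1 score, precision, specificity, and sensitivity together. As a reminder of how these metrics relate, here is a minimal sketch computing all five from binary confusion-matrix counts; the counts below are made up for illustration and are not the study's data:

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion counts:
    tp/fp/tn/fn = true/false positives and negatives."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    precision   = tp / (tp + fp)
    sensitivity = tp / (tp + fn)      # recall / true-positive rate
    specificity = tn / (tn + fp)      # true-negative rate
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return dict(accuracy=accuracy, precision=precision,
                sensitivity=sensitivity, specificity=specificity, f1=f1)

# Hypothetical counts for a 200-image test set
m = binary_metrics(tp=90, fp=5, tn=95, fn=10)
print(m["accuracy"])  # 0.925
```

Reporting specificity alongside sensitivity matters here because a COVID-19 screen must control false positives as well as missed infections.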

Citations: 0
Diabetic Retinopathy Detection Using Local Extrema Quantized Haralick Features with Long Short-Term Memory Network.
IF 3.3 Q2 ENGINEERING, BIOMEDICAL Pub Date : 2021-04-14 eCollection Date: 2021-01-01 DOI: 10.1155/2021/6618666
Abubakar M Ashir, Salisu Ibrahim, Mohammed Abdulghani, Abdullahi Abdu Ibrahim, Mohammed S Anwar

Diabetic retinopathy is one of the leading diseases affecting the eyes. Without early detection and treatment, it can lead to total blindness of the diseased eyes. Recently, numerous researchers have attempted to produce automatic diabetic retinopathy detection techniques to supplement diagnosis and early treatment of diabetic retinopathy symptoms. In this manuscript, a new approach is proposed. It uses features extracted from the fundus image via local extrema information combined with quantized Haralick features. The quantized features not only encode the textural Haralick features but also exploit the multiresolution information of numerous symptoms in diabetic retinopathy. A Long Short-Term Memory network together with the local extrema pattern provides a probabilistic approach to analyze each segment of the image with higher precision, which helps to suppress false positives. The proposed approach analyzes the retinal vasculature and hard-exudate symptoms of diabetic retinopathy on two different public datasets. The experimental results, evaluated using performance metrics such as specificity, accuracy, and sensitivity, are promising. Comparison with related state-of-the-art work likewise supports the validity of the proposed method, which performs better than most of the methods used for comparison.
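The Haralick features the abstract builds on are statistics of a gray-level co-occurrence matrix (GLCM). The textbook idea can be sketched in plain NumPy; this is a generic illustration of one GLCM statistic (contrast), not the authors' local-extrema-quantized pipeline:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy),
    normalized to a joint probability table."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def haralick_contrast(P):
    """Haralick contrast: expected squared gray-level difference."""
    i, j = np.indices(P.shape)
    return ((i - j) ** 2 * P).sum()

# Toy 4-level quantized image patch
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img, levels=4)
print(round(haralick_contrast(P), 3))  # 0.583
```

A full Haralick descriptor would add further GLCM statistics (energy, homogeneity, correlation, entropy) over several offsets and directions.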

Citations: 0
Geometric Regularized Hopfield Neural Network for Medical Image Enhancement.
IF 7.6 Q1 Medicine Pub Date : 2021-01-22 eCollection Date: 2021-01-01 DOI: 10.1155/2021/6664569
Fayadh Alenezi, K C Santosh

One of the major shortcomings of the Hopfield neural network (HNN) is that the network may not always converge to a fixed point. HNN is predominantly limited to local optimization during training to achieve network stability. In this paper, the convergence problem is addressed using two approaches: (a) by sequencing the activation of a continuous modified HNN (MHNN) based on the geometric correlation of features within various image hyperplanes via pixel gradient vectors and (b) by regulating geometric pixel gradient vectors. These are achieved by regularizing the proposed MHNNs under cohomology, which enables them to act as an unconventional filter for pixel spectral sequences. This shifts the focus to both local and global optimization in order to strengthen feature correlations within each image subspace, thereby enhancing edges, information content, contrast, and resolution. The proposed algorithm was tested on fifteen different medical images, with evaluations based on entropy, visual information fidelity (VIF), weighted peak signal-to-noise ratio (WPSNR), contrast, and homogeneity. Our results confirmed its superiority over four existing benchmark enhancement methods.
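For readers unfamiliar with the convergence behaviour being discussed, here is a textbook discrete Hopfield network with Hebbian weights and asynchronous updates, whose energy is non-increasing and therefore settles into a local minimum (possibly a spurious one). This is a baseline illustration only, not the continuous MHNN proposed in the paper:

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian learning: sum of outer products, zero self-connections."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def energy(W, s):
    """Hopfield energy; asynchronous updates never increase it."""
    return -0.5 * s @ W @ s

def recall(W, s, sweeps=20):
    """Asynchronous sign updates until the state settles."""
    s = s.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

pattern = np.array([[1., -1., 1., -1., 1., -1.]])
W = hebbian_weights(pattern)
noisy = pattern[0].copy()
noisy[0] = -1.0                         # corrupt one neuron
out = recall(W, noisy)
print(np.array_equal(out, pattern[0]))  # True: stored pattern recovered
```

With many stored patterns the same dynamics can get trapped in mixture states, which is exactly the local-minimum limitation the paper's geometric regularization targets.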

Citations: 25
Value CMR: Towards a Comprehensive, Rapid, Cost-Effective Cardiovascular Magnetic Resonance Imaging.
IF 7.6 Q1 Medicine Pub Date : 2021-01-01 DOI: 10.1155/2021/8851958
El-Sayed H Ibrahim, Luba Frank, Dhiraj Baruah, V Emre Arpinar, Andrew S Nencka, Kevin M Koch, L Tugan Muftuler, Orhan Unal, Jadranka Stojanovska, Jason C Rubenstein, Sherry-Ann Brown, John Charlson, Elizabeth M Gore, Carmen Bergom

Cardiac magnetic resonance imaging (CMR) is considered the gold standard for measuring cardiac function. Further, a single CMR exam can provide information about cardiac structure, tissue composition, and blood flow. Nevertheless, CMR is underutilized due to long scanning times, the need for multiple breath-holds, use of a contrast agent, and relatively high cost. In this work, we propose a rapid, comprehensive, contrast-free CMR exam that does not require repeated breath-holds, based on recent developments in imaging sequences. Time-consuming conventional sequences are replaced by advanced sequences in the proposed CMR exam. Specifically, conventional 2D cine and phase-contrast (PC) sequences are replaced by optimized 3D-cine and 4D-flow sequences, respectively. Furthermore, conventional myocardial tagging is replaced by fast strain-encoding (SENC) imaging. Finally, T1 and T2 mapping sequences are included in the proposed exam, which allows for myocardial tissue characterization. The proposed rapid exam has been tested in vivo. It reduced the scan time from >1 hour with conventional sequences to <20 minutes. Corresponding cardiovascular measurements from the proposed rapid CMR exam showed good agreement with those from conventional sequences and could differentiate between healthy volunteers and patients. Compared to 2D cine imaging, which requires 12-16 separate breath-holds, the implemented 3D-cine sequence allows for whole-heart coverage in 1-2 breath-holds. The 4D-flow sequence allows for whole-chest coverage in less than 10 minutes. Finally, SENC imaging reduces scan time to only one slice per heartbeat. In conclusion, the proposed rapid, contrast-free, and comprehensive cardiovascular exam requires neither repeated breath-holds nor supervision by a cardiac imager. These improvements make it tolerable for patients and would help improve the cost-effectiveness of CMR and increase its adoption in clinical practice.

Citations: 5
Three-Dimensional Imaging of Pulmonary Fibrotic Foci at the Alveolar Scale Using Tissue-Clearing Treatment with Staining Techniques of Extracellular Matrix.
IF 7.6 Q1 Medicine Pub Date : 2020-12-29 eCollection Date: 2020-01-01 DOI: 10.1155/2020/8815231
Kohei Togami, Hiroaki Ozaki, Yuki Yumita, Anri Kitayama, Hitoshi Tada, Sumio Chono

Idiopathic pulmonary fibrosis is a progressive, chronic lung disease characterized by the accumulation of extracellular matrix proteins, including collagen and elastin. Imaging of the extracellular matrix in fibrotic lungs is important for evaluating its pathological condition as well as the distribution of drugs to pulmonary focus sites and their therapeutic effects. In this study, we compared techniques of staining the extracellular matrix with optical tissue-clearing treatment for developing three-dimensional imaging methods for focus sites in pulmonary fibrosis. Mouse models of pulmonary fibrosis were prepared via the intrapulmonary administration of bleomycin. Fluorescent-labeled tomato lectin, collagen I antibody, and Col-F, a fluorescent probe for collagen and elastin, were used to compare the imaging of fibrotic foci in intact fibrotic lungs. The lung samples were cleared using the ClearT2 tissue-clearing technique. The cleared lungs were observed two-dimensionally using laser-scanning confocal microscopy, and the images were compared with those of lung tissue sections. Moreover, three-dimensional images were reconstructed from serial two-dimensional images. Fluorescent-labeled tomato lectin did not enable visualization of fibrotic foci in cleared fibrotic lungs. Although collagen I in fibrotic lungs could be visualized via immunofluorescence staining, it was clearly visible only up to 40 μm from the lung surface. Col-F staining enabled visualization of collagen and elastin to a depth of 120 μm in cleared lung tissues. Furthermore, we visualized the three-dimensional extracellular matrix in cleared fibrotic lungs using Col-F, and the images provided better visualization than immunofluorescence staining. These results suggest that ClearT2 tissue-clearing treatment combined with Col-F staining represents a simple and rapid technique for imaging fibrotic foci in intact fibrotic lungs. This study provides important information for imaging various organs affected by extracellular matrix-related diseases.

Citation count: 3
A Modified Phase Cycling Method for Complex-Valued MRI Reconstruction.
IF 7.6 Q1 Medicine Pub Date : 2020-11-18 eCollection Date: 2020-01-01 DOI: 10.1155/2020/8846220
Wei He, Yu Zhang, Junling Ding, Linman Zhao

The phase cycling method is a state-of-the-art method for reconstructing complex-valued MR images. However, when it follows a practical two-dimensional (2D) subsampled Cartesian acquisition, which enforces random sampling only in the phase-encoding direction, a number of magnitude artifacts appear. A modified approach is proposed to remove these artifacts under practical MRI subsampling by adding one-dimensional total variation (TV) regularization into the phase cycling method to "pre-process" the magnitude component before its update. Furthermore, an operation used in SFISTA is employed to update the magnitude and phase images for better solutions. The experimental results show that the proposed method eliminates the ring artifacts and improves the magnitude reconstruction.
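The one-dimensional TV "pre-processing" step described in the abstract can be illustrated with a small numerical sketch. This is not the authors' implementation: the smoothed-TV surrogate, the regularization weight, the step size, and the iteration count below are all illustrative assumptions.

```python
import numpy as np

def tv_denoise_1d(x, lam=0.3, n_iter=200, step=0.1, eps=1e-8):
    """Approximate 1D total-variation denoising by gradient descent.

    Minimizes 0.5 * ||u - x||^2 + lam * sum_i sqrt((u[i+1] - u[i])^2 + eps),
    a smoothed surrogate of the TV penalty (illustrative choice, not the
    paper's solver).
    """
    u = x.astype(float).copy()
    for _ in range(n_iter):
        d = np.diff(u)                      # finite differences of the signal
        w = d / np.sqrt(d * d + eps)        # (sub)gradient of |d|
        g = u - x                           # data-fidelity gradient
        g[:-1] -= lam * w                   # back-difference of the TV term
        g[1:] += lam * w
        u -= step * g
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.concatenate([np.zeros(50), np.ones(50)])   # piecewise-constant line
    noisy = clean + 0.1 * rng.standard_normal(100)
    denoised = tv_denoise_1d(noisy)
    print(np.sum(np.abs(np.diff(noisy))), np.sum(np.abs(np.diff(denoised))))
```

Applied row by row to the magnitude image, such a step smooths oscillatory artifacts along the phase-encoding direction while keeping sharp edges, which is the intuition behind using 1D rather than 2D TV here.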

Citation count: 1
Ensemble Learning with Multiclassifiers on Pediatric Hand Radiograph Segmentation for Bone Age Assessment.
IF 7.6 Q1 Medicine Pub Date : 2020-10-27 eCollection Date: 2020-01-01 DOI: 10.1155/2020/8866700
Rui Liu, Yuanyuan Jia, Xiangqian He, Zhe Li, Jinhua Cai, Hao Li, Xiao Yang

In pediatric automatic bone age assessment (BAA) in clinical practice, extraction of the object area in hand radiographs is an important step that directly affects the prediction accuracy of BAA, yet no perfect segmentation solution has been found. This work develops an automatic hand radiograph segmentation method with high precision and efficiency. We treated hand segmentation as a classification problem in which the optimal segmentation threshold for each image is the prediction target. The normalized histogram, mean value, and variance of each image were used as input features to train the classification model, based on ensemble learning with multiple classifiers. The dataset included 600 left-hand radiographs with bone ages ranging from 1 to 18 years. Compared with traditional segmentation methods and the state-of-the-art U-Net network, the proposed method achieved higher precision with less computational load, reaching an average PSNR of 52.43 dB, SSIM of 0.97, DSC of 0.97, and JSI of 0.91, which makes it more suitable for clinical application. Furthermore, the experimental results verified that hand radiograph segmentation improves BAA performance by at least 13% on average.
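The pipeline the abstract describes (per-image histogram, mean, and variance features feeding a multi-classifier ensemble that votes on a threshold class) can be sketched as follows. The nearest-centroid base learners, the 16-bin histogram, and the two-class setup are illustrative assumptions, not the paper's actual classifiers.

```python
import numpy as np

def image_features(img, bins=16):
    """Normalized histogram + mean + variance, as in the described feature set."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0), density=True)
    return np.concatenate([hist, [img.mean(), img.var()]])

class NearestCentroid:
    """Minimal nearest-centroid base learner over a chosen feature subset."""
    def __init__(self, idx):
        self.idx = np.asarray(list(idx))

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c][:, self.idx].mean(axis=0)
                                    for c in self.classes_])
        return self

    def predict(self, X):
        Xs = X[:, self.idx]
        d = np.linalg.norm(Xs[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

def majority_vote(classifiers, X):
    """Each base learner votes for a threshold class; keep the most frequent vote."""
    votes = np.stack([c.predict(X) for c in classifiers])
    return np.array([np.bincount(col).argmax() for col in votes.T])
```

Training one base learner per feature subset (histogram, mean, variance) and combining them by majority vote mirrors the ensemble idea; predicting a discrete threshold class rather than a pixel mask is what keeps the computational load far below a U-Net.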

Citation count: 0
Artificial Intelligence-Based Classification of Chest X-Ray Images into COVID-19 and Other Infectious Diseases.
IF 7.6 Q1 Medicine Pub Date : 2020-10-06 eCollection Date: 2020-01-01 DOI: 10.1155/2020/8889023
Arun Sharma, Sheeba Rani, Dinesh Gupta

The ongoing pandemic of coronavirus disease 2019 (COVID-19) has led to a global health and healthcare crisis, apart from its tremendous socioeconomic effects. One of the significant challenges in this crisis is to identify and monitor COVID-19 patients quickly and efficiently, facilitating timely decisions for their treatment, monitoring, and management. Research efforts are underway to develop less time-consuming methods to replace or supplement RT-PCR-based testing. The present study aims to create efficient deep learning models, trained on chest X-ray images, for rapid screening of COVID-19 patients. We used publicly available PA chest X-ray images of adult COVID-19 patients to develop Artificial Intelligence (AI)-based classification models for COVID-19 and other major infectious diseases. To increase the dataset size and develop generalized models, we performed 25 different types of augmentation on the original images. Furthermore, we utilized a transfer learning approach for training and testing the classification models. The combination of the two best-performing models (each trained on 286 images, rotated through a 120° or 140° angle) displayed the highest prediction accuracy for normal, COVID-19, non-COVID-19, pneumonia, and tuberculosis images. AI-based classification models trained through transfer learning can efficiently classify chest X-ray images representing the studied diseases. Our method is more efficient than previously published methods and is a step towards implementing AI-based methods for classification problems in biomedical imaging related to COVID-19.
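The dataset-expansion idea described above (each original image spawning many augmented variants) can be sketched with numpy-only transforms. The specific operations and parameter ranges below are illustrative assumptions, not the study's actual 25 augmentation types.

```python
import numpy as np

def augment(img, rng):
    """Apply one randomly chosen geometric or intensity transform (illustrative set)."""
    ops = [
        lambda x: np.fliplr(x),                                   # horizontal flip
        lambda x: np.flipud(x),                                   # vertical flip
        lambda x: np.rot90(x, k=int(rng.integers(1, 4))),         # 90/180/270 deg rotation
        lambda x: np.clip(x * rng.uniform(0.8, 1.2), 0.0, 1.0),   # brightness jitter
    ]
    return ops[int(rng.integers(len(ops)))](img)

def expand_dataset(images, n_aug=25, seed=0):
    """Return the originals plus n_aug augmented copies of each image."""
    rng = np.random.default_rng(seed)
    out = list(images)
    for img in images:
        out.extend(augment(img, rng) for _ in range(n_aug))
    return out
```

With n_aug=25 this grows the dataset 26-fold, which is the kind of expansion that lets a transfer-learned backbone generalize from only a few hundred source radiographs.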

Citation count: 0