
Latest Publications from the International Journal of Biomedical Imaging

Content-Based Image Retrieval Using Colour, Gray, Advanced Texture, Shape Features, and Random Forest Classifier with Optimized Particle Swarm Optimization
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date : 2022-04-21 DOI: 10.1155/2022/3211793
Dr. MANOHARAN SUBRAMANIAN, Velmurugan Lingamuthu, Chandran Venkatesan, S. Perumal
This paper presents a new approach to Content-Based Image Retrieval (CBIR) that extracts colour, gray, advanced texture, and shape features from input query images. Contour-based shape descriptors and image moments are used to capture shape and shape-invariant features. Particle Swarm Optimization (PSO) selects the informative features from the extracted set and combines the colour, gray, texture, and shape features. A random forest classifier is then trained to retrieve the target image for a given query image. The proposed colour, gray, advanced texture, and shape feature pipeline with a PSO-optimized random forest classifier (CGATSFRFOPSO) provides efficient retrieval of images from a large-scale database. The main objective of this work is to improve the efficiency and effectiveness of the CBIR system by extracting colour, gray, texture, and shape features from both database and query images. The extracted features are processed at several levels: redundancy is removed by optimal feature selection, and the features are fused by an optimally weighted linear combination. PSO selects the informative features from the gray, colour, and texture sets, and an ensemble of machine learning algorithms for the similarity search improves both matching accuracy and retrieval speed.
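As a rough illustration of the PSO-based feature selection described above, the sketch below wraps a binary PSO around a random forest fitness function using scikit-learn and NumPy. The feature matrix `X`, labels `y`, swarm size, and update rule are illustrative assumptions, not the authors' exact CGATSFRFOPSO formulation.

```python
# Minimal sketch: binary PSO feature selection around a random forest.
# Assumes X (n_samples x n_features) holds the fused colour/gray/texture/shape
# features and y the image category labels; hyperparameters are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    # Accuracy of a random forest restricted to the currently selected features.
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def pso_select(X, y, n_particles=10, n_iter=20, w=0.7, c1=1.5, c2=1.5):
    n_feat = X.shape[1]
    pos = (rng.random((n_particles, n_feat)) > 0.5).astype(float)   # binary masks
    vel = rng.normal(0.0, 0.1, (n_particles, n_feat))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(n_iter):
        r1 = rng.random((n_particles, n_feat))
        r2 = rng.random((n_particles, n_feat))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        prob = 1.0 / (1.0 + np.exp(-vel))                # sigmoid transfer function
        pos = (rng.random((n_particles, n_feat)) < prob).astype(float)
        fit = np.array([fitness(p, X, y) for p in pos])
        better = fit > pbest_fit
        pbest[better] = pos[better]
        pbest_fit[better] = fit[better]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest.astype(bool)   # mask of selected informative features

# A final RandomForestClassifier would then be trained on X[:, mask] for retrieval.
```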
Citations: 4
Modified Gray-Level Haralick Texture Features for Early Detection of Diabetes Mellitus and High Cholesterol with Iris Image
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date : 2022-04-20 DOI: 10.1155/2022/5336373
R. K. Hapsari, Miswanto, R. Rulaningtyas, H. Suprajitno, H. Gan
The iris has a specific advantage: it can reflect organ conditions, body constitution, and psychological disorders. Traces related to the intensity or deviation of organs affected by disease are recorded systematically and patterned on the iris and its surroundings, and these patterns can be recognized with image processing techniques. Based on the patterns in iris images, this paper provides an alternative noninvasive method for the early detection of diabetes mellitus (DM) and high cholesterol (HC). We detect both diseases simultaneously from iris images by developing an invariant Haralick feature on images quantized to 256, 128, 64, 32, and 16 gray levels. Among the many feature extraction methods that have been introduced, one of the most widely used is the gray-level co-occurrence matrix (GLCM); here, early detection is based on its volumetric extension, the 3D-GLCM. The 3D-GLCM is formed at a distance of d = 1 along the directions 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315° and is used to compute Haralick features and to derive Haralick features that are invariant to the number of quantization gray levels. The test results show that the invariant features at 256 gray levels give the best identification performance: on dataset I, accuracy is 97.92, precision is 96.88, and recall is 95.83, while on dataset II, accuracy is 95.83, precision is 89.69, and recall is 91.67. Identification of DM and HC trained on the invariant features was more accurate than with the original features.
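For context, the sketch below computes Haralick-style texture statistics from gray-level co-occurrence matrices at several quantization levels with scikit-image. It uses the ordinary 2D GLCM at distance d = 1 in four directions; the paper's volumetric 3D-GLCM and its invariance construction are not reproduced here.

```python
# Sketch: Haralick-style GLCM texture features at several gray-level quantizations.
# Standard 2D GLCM only (scikit-image >= 0.19); the 3D-GLCM and the invariant
# features described in the paper are not reproduced here.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(image_u8, levels):
    """image_u8: 2D uint8 iris image; levels: number of quantization gray levels."""
    quantized = (image_u8.astype(np.float64) * levels / 256).astype(np.uint8)
    # Distance d = 1 in four directions (0°, 45°, 90°, 135°); the opposite
    # directions are covered by the symmetric co-occurrence matrix.
    glcm = graycomatrix(quantized, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Features at each quantization level, e.g. to compare 256 vs. 16 gray levels:
# features = {lv: glcm_features(iris_img, lv) for lv in (256, 128, 64, 32, 16)}
```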
Citations: 1
Value CMR: Towards a Comprehensive, Rapid, Cost-Effective Cardiovascular Magnetic Resonance Imaging.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date : 2021-01-01 DOI: 10.1155/2021/8851958
El-Sayed H Ibrahim, Luba Frank, Dhiraj Baruah, V Emre Arpinar, Andrew S Nencka, Kevin M Koch, L Tugan Muftuler, Orhan Unal, Jadranka Stojanovska, Jason C Rubenstein, Sherry-Ann Brown, John Charlson, Elizabeth M Gore, Carmen Bergom

Cardiac magnetic resonance imaging (CMR) is considered the gold standard for measuring cardiac function. Further, in a single CMR exam, information about cardiac structure, tissue composition, and blood flow can be obtained. Nevertheless, CMR is underutilized due to long scanning times, the need for multiple breath-holds, use of a contrast agent, and relatively high cost. In this work, we propose a rapid, comprehensive, contrast-free CMR exam that does not require repeated breath-holds, based on recent developments in imaging sequences. Time-consuming conventional sequences have been replaced by advanced sequences in the proposed CMR exam. Specifically, conventional 2D cine and phase-contrast (PC) sequences have been replaced by optimized 3D-cine and 4D-flow sequences, respectively. Furthermore, conventional myocardial tagging has been replaced by fast strain-encoding (SENC) imaging. Finally, T1 and T2 mapping sequences are included in the proposed exam, which allows for myocardial tissue characterization. The proposed rapid exam has been tested in vivo. It reduced the scan time from >1 hour with conventional sequences to <20 minutes. Corresponding cardiovascular measurements from the proposed rapid CMR exam showed good agreement with those from conventional sequences and showed that they can differentiate between healthy volunteers and patients. Compared to 2D cine imaging, which requires 12-16 separate breath-holds, the implemented 3D-cine sequence allows for whole-heart coverage in 1-2 breath-holds. The 4D-flow sequence allows for whole-chest coverage in less than 10 minutes. Finally, SENC imaging reduces scan time to only one slice per heartbeat. In conclusion, the proposed rapid, contrast-free, and comprehensive cardiovascular exam does not require repeated breath-holds or supervision by a cardiac imager. These improvements make it more tolerable for patients and would help improve the cost-effectiveness of CMR and increase its adoption in clinical practice.

Citations: 5
Three-Dimensional Imaging of Pulmonary Fibrotic Foci at the Alveolar Scale Using Tissue-Clearing Treatment with Staining Techniques of Extracellular Matrix.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date : 2020-12-29 eCollection Date: 2020-01-01 DOI: 10.1155/2020/8815231
Kohei Togami, Hiroaki Ozaki, Yuki Yumita, Anri Kitayama, Hitoshi Tada, Sumio Chono

Idiopathic pulmonary fibrosis is a progressive, chronic lung disease characterized by the accumulation of extracellular matrix proteins, including collagen and elastin. Imaging of the extracellular matrix in fibrotic lungs is important for evaluating its pathological condition as well as the distribution of drugs to pulmonary focus sites and their therapeutic effects. In this study, we compared techniques of staining the extracellular matrix with optical tissue-clearing treatment for developing three-dimensional imaging methods for focus sites in pulmonary fibrosis. Mouse models of pulmonary fibrosis were prepared via the intrapulmonary administration of bleomycin. Fluorescent-labeled tomato lectin, collagen I antibody, and Col-F, a fluorescent probe for collagen and elastin, were used to compare the imaging of fibrotic foci in intact fibrotic lungs. These lung samples were cleared using the ClearT2 tissue-clearing technique. The cleared lungs were observed two-dimensionally using laser-scanning confocal microscopy, and the images were compared with those of the lung tissue sections. Moreover, three-dimensional images were reconstructed from serial two-dimensional images. Fluorescent-labeled tomato lectin did not enable the visualization of fibrotic foci in cleared fibrotic lungs. Although collagen I in fibrotic lungs could be visualized via immunofluorescence staining, collagen I was clearly visible only up to 40 μm from the lung surface. Col-F staining facilitated the visualization of collagen and elastin to a depth of 120 μm in cleared lung tissues. Furthermore, we visualized the three-dimensional extracellular matrix in cleared fibrotic lungs using Col-F, and the images provided better visualization than immunofluorescence staining. These results suggest that ClearT2 tissue-clearing treatment combined with Col-F staining represents a simple and rapid technique for imaging fibrotic foci in intact fibrotic lungs. This study provides important information for imaging various organs affected by extracellular matrix-related diseases.

Citations: 3
A Modified Phase Cycling Method for Complex-Valued MRI Reconstruction.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date : 2020-11-18 eCollection Date: 2020-01-01 DOI: 10.1155/2020/8846220
Wei He, Yu Zhang, Junling Ding, Linman Zhao

The phase cycling method is a state-of-the-art approach to reconstructing complex-valued MR images. However, when it is applied to a practical two-dimensional (2D) subsampled Cartesian acquisition, in which random sampling is enforced only along the phase-encoding direction, a number of artifacts appear in the magnitude image. A modified approach is proposed to remove these artifacts under practical MRI subsampling by adding one-dimensional total variation (TV) regularization to the phase cycling method to "pre-process" the magnitude component before it is updated. Furthermore, an operation used in SFISTA is employed to update the magnitude and phase images for better solutions. The experimental results show that the proposed method eliminates the ring artifacts and improves the magnitude reconstruction.
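The sketch below conveys the general idea of regularizing only the magnitude component along the phase-encoding direction before a data-consistency step. The Chambolle TV denoiser from scikit-image and the simple k-space projection are stand-ins assumed for illustration; they are not the authors' phase cycling or SFISTA updates.

```python
# Sketch: alternately (1) TV-smooth the magnitude along the phase-encoding axis and
# (2) enforce consistency with the acquired k-space samples. This is a simplified
# stand-in for the paper's phase cycling + SFISTA scheme, not a reimplementation.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def reconstruct(kspace, mask, n_iter=30, tv_weight=0.02, pe_axis=0):
    """kspace: undersampled, centered 2D k-space; mask: boolean sampling mask."""
    img = np.fft.ifft2(np.fft.ifftshift(kspace))
    for _ in range(n_iter):
        mag, phase = np.abs(img), np.angle(img)
        # 1D total-variation regularization of the magnitude along the
        # phase-encoding direction, where the subsampling artifacts live.
        mag = np.apply_along_axis(
            lambda line: denoise_tv_chambolle(line, weight=tv_weight), pe_axis, mag)
        img = mag * np.exp(1j * phase)
        # Data consistency: keep the measured k-space samples unchanged.
        k = np.fft.fftshift(np.fft.fft2(img))
        k[mask] = kspace[mask]
        img = np.fft.ifft2(np.fft.ifftshift(k))
    return img
```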

Citations: 1
Ensemble Learning with Multiclassifiers on Pediatric Hand Radiograph Segmentation for Bone Age Assessment.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date : 2020-10-27 eCollection Date: 2020-01-01 DOI: 10.1155/2020/8866700
Rui Liu, Yuanyuan Jia, Xiangqian He, Zhe Li, Jinhua Cai, Hao Li, Xiao Yang

In pediatric automatic bone age assessment (BAA) in clinical practice, extraction of the object area in hand radiographs is an important step that directly affects the prediction accuracy of the BAA, yet no perfect segmentation solution has been found. This work develops an automatic hand radiograph segmentation method with high precision and efficiency. We treated hand segmentation as a classification problem, with the optimal segmentation threshold for each image as the prediction target. The normalized histogram, mean value, and variance of each image were used as input features to train the classification model, based on ensemble learning with multiple classifiers. The dataset comprised 600 left-hand radiographs with bone ages ranging from 1 to 18 years. Compared with traditional segmentation methods and the state-of-the-art U-Net network, the proposed method performed better with higher precision and a lower computational load, achieving an average PSNR of 52.43 dB, SSIM of 0.97, DSC of 0.97, and JSI of 0.91, making it more suitable for clinical application. Furthermore, the experimental results verified that hand radiograph segmentation improves BAA performance by at least 13% on average.
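A minimal sketch of the described pipeline, assuming the per-image optimal threshold has been discretized into class labels: intensity-histogram, mean, and variance features feed a soft-voting ensemble. The specific ensemble members and histogram size are assumptions, not the paper's exact configuration.

```python
# Sketch: predict each radiograph's segmentation-threshold class from simple
# intensity statistics with a voting ensemble. Feature choices and ensemble
# members are illustrative; the paper's exact configuration may differ.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def image_features(img_u8, bins=32):
    # Normalized intensity histogram plus global mean and variance.
    hist, _ = np.histogram(img_u8, bins=bins, range=(0, 256), density=True)
    return np.concatenate([hist, [img_u8.mean(), img_u8.var()]])

# X: one feature vector per radiograph; y: its optimal threshold, discretized to a class.
# X = np.stack([image_features(im) for im in radiographs]); y = threshold_classes
ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("svm", SVC(probability=True, random_state=0)),
                ("knn", KNeighborsClassifier(n_neighbors=5))],
    voting="soft")
# ensemble.fit(X, y); predicted_threshold_class = ensemble.predict(X_new)
```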

Citations: 0
Artificial Intelligence-Based Classification of Chest X-Ray Images into COVID-19 and Other Infectious Diseases.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date : 2020-10-06 eCollection Date: 2020-01-01 DOI: 10.1155/2020/8889023
Arun Sharma, Sheeba Rani, Dinesh Gupta

The ongoing pandemic of coronavirus disease 2019 (COVID-19) has led to a global health and healthcare crisis, apart from its tremendous socioeconomic effects. One of the significant challenges in this crisis is to identify and monitor COVID-19 patients quickly and efficiently so that timely decisions can be made about their treatment, monitoring, and management. Research efforts are underway to develop less time-consuming methods to replace or supplement RT-PCR-based methods. The present study aims to create efficient deep learning models, trained with chest X-ray images, for rapid screening of COVID-19 patients. We used publicly available PA chest X-ray images of adult COVID-19 patients to develop Artificial Intelligence (AI)-based classification models for COVID-19 and other major infectious diseases. To increase the dataset size and develop generalized models, we performed 25 different types of augmentation on the original images. Furthermore, we utilized the transfer learning approach for training and testing the classification models. The combination of the two best-performing models (each trained on 286 images rotated through a 120° or 140° angle) displayed the highest prediction accuracy for normal, COVID-19, non-COVID-19, pneumonia, and tuberculosis images. AI-based classification models trained through transfer learning can efficiently classify chest X-ray images of the studied diseases. Our method is more efficient than previously published methods and is a step towards implementing AI-based methods for classification problems in biomedical imaging related to COVID-19.
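A minimal transfer learning sketch in the spirit of the described approach, using PyTorch and torchvision. The backbone (ResNet-50 here), class count, rotation augmentation, and optimizer settings are arbitrary assumptions for illustration; the study's own architectures and training details may differ.

```python
# Sketch: transfer learning for chest X-ray classes
# (e.g. normal / COVID-19 / non-COVID-19 / pneumonia / tuberculosis).
# Requires torchvision >= 0.13 for the weights enum; settings are illustrative.
import torch
import torch.nn as nn
from torchvision import models, transforms

num_classes = 5
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, num_classes)   # replace the final layer

# Rotation-based augmentation in the spirit of the paper's rotated training images.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomRotation(degrees=140),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Standard loop: for images, labels in loader: loss = criterion(model(images), labels); ...
```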

Citations: 0
Transfer Learning to Detect COVID-19 Automatically from X-Ray Images Using Convolutional Neural Networks
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date : 2020-08-31 DOI: 10.1101/2020.08.25.20182170
Mundher Mohammed Taresh, N. Zhu, T. Ali, Asaad Shakir Hameed, Modhi Lafta Mutar
Novel coronavirus pneumonia (COVID-19) is a contagious disease that has already caused thousands of deaths and infected millions of people worldwide. Thus, any technology that enables fast, highly accurate detection of COVID-19 infection can help healthcare professionals. This study explores the effectiveness of artificial intelligence (AI) in the rapid and reliable detection of COVID-19 from chest X-ray imaging. Reliable pre-trained deep learning algorithms were applied to automatically detect COVID-19-induced pneumonia from digital chest X-ray images, and the study also evaluates the performance of advanced neural architectures proposed for medical image classification in recent years. The dataset used in the experiments comprises 274 COVID-19 cases, 380 viral pneumonia cases, and 380 healthy cases, derived from several open X-ray sources and data available online. The confusion matrix provided the basis for evaluating the models after classification, and the open-source library PYCM was used to compute the statistical parameters. The study revealed the superiority of the VGG16 model over the other models applied in this research, performing best in terms of both overall and class-based scores. According to the results, deep learning with X-ray imaging is useful for collecting critical biological markers associated with COVID-19 infection, and the technique can help physicians diagnose COVID-19 infection. Meanwhile, the high accuracy of this computer-aided diagnostic tool can significantly improve the speed and accuracy of COVID-19 diagnosis.
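The abstract mentions evaluating the models through a confusion matrix with the open-source PYCM library; below is a minimal usage sketch, with placeholder label vectors standing in for the model's test-set predictions.

```python
# Sketch: post-classification evaluation with PYCM, as mentioned in the abstract.
# The label vectors are placeholders for the actual test-set labels and predictions.
from pycm import ConfusionMatrix

actual  = ["covid", "covid", "viral", "normal", "viral", "normal"]
predict = ["covid", "viral", "viral", "normal", "viral", "covid"]

cm = ConfusionMatrix(actual_vector=actual, predict_vector=predict)
cm.print_matrix()        # confusion matrix table
print(cm.Overall_ACC)    # overall accuracy
print(cm.PPV, cm.TPR)    # per-class precision and recall dictionaries
```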
Citations: 72
An Algorithm of l1-Norm and l0-Norm Regularization Algorithm for CT Image Reconstruction from Limited Projection.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date : 2020-08-28 eCollection Date: 2020-01-01 DOI: 10.1155/2020/8873865
Xiezhang Li, Guocan Feng, Jiehua Zhu

The l1-norm regularization has attracted attention for image reconstruction in computed tomography. The l0-norm of the gradients of an image provides a measure of the sparsity of the image gradients. In this paper, we present a new combined l1-norm and l0-norm regularization model for image reconstruction from limited projection data in computed tomography. We also propose an algorithm in the algebraic framework that solves the optimization effectively using a nonmonotone alternating direction algorithm with a hard-thresholding method. Numerical experiments indicate that the new algorithm achieves a substantial improvement by incorporating l0-norm regularization.
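For context, soft thresholding is the proximal operator of the l1-norm and hard thresholding that of the l0-norm; the short sketch below shows both operators as they would act on image-gradient coefficients inside an alternating-direction iteration. It does not reproduce the paper's full algebraic reconstruction.

```python
# Sketch: the two thresholding operators behind l1- and l0-regularization.
# soft_threshold is the proximal map of lam*||x||_1; hard_threshold is the
# proximal map of lam*||x||_0 (keep a coefficient only if |x| > sqrt(2*lam)).
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_threshold(x, lam):
    return np.where(np.abs(x) > np.sqrt(2.0 * lam), x, 0.0)

# Example: shrink the horizontal and vertical gradients of a reconstructed image.
# gx, gy = np.diff(img, axis=1), np.diff(img, axis=0)
# gx = soft_threshold(gx, 0.01); gy = hard_threshold(gy, 0.01)
```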

Citations: 2
COVID-19 Deep Learning Prediction Model Using Publicly Available Radiologist-Adjudicated Chest X-Ray Images as Training Data: Preliminary Findings.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date : 2020-08-18 eCollection Date: 2020-01-01 DOI: 10.1155/2020/8828855
Mohd Zulfaezal Che Azemin, Radhiana Hassan, Mohd Izzuddin Mohd Tamrin, Mohd Adli Md Ali

A key component of deep learning research is the availability of training datasets. With only a limited number of publicly available COVID-19 chest X-ray images, the generalization and robustness of deep learning models for detecting COVID-19 that are developed from these images are questionable. We aimed to use thousands of readily available chest radiographs with clinical findings associated with COVID-19 as the training dataset, kept mutually exclusive from the images of confirmed COVID-19 cases, which were used as the testing dataset. We used a deep learning model based on the ResNet-101 convolutional neural network architecture, pretrained to recognize objects from a million images and then retrained to detect abnormalities in chest X-ray images. The performance of the model in terms of area under the receiver operating characteristic curve, sensitivity, specificity, and accuracy was 0.82, 77.3%, 71.8%, and 71.9%, respectively. The strength of this study lies in the use of labels that have a strong clinical association with COVID-19 cases and in the use of mutually exclusive publicly available data for training, validation, and testing.
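The reported figures can be computed from a model's scores and thresholded predictions as in the sketch below, which uses scikit-learn with placeholder arrays standing in for the test-set outputs.

```python
# Sketch: computing the reported metrics (AUC, sensitivity, specificity, accuracy)
# from binary labels, predicted scores, and thresholded predictions. The arrays
# here are placeholders for the model's outputs on the test set.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # 1 = abnormal/COVID-19 finding
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.3, 0.6, 0.8, 0.1])
y_pred  = (y_score >= 0.5).astype(int)

auc = roc_auc_score(y_true, y_score)            # area under the ROC curve
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
```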

Citations: 96