Idiopathic pulmonary fibrosis is a progressive, chronic lung disease characterized by the accumulation of extracellular matrix proteins, including collagen and elastin. Imaging of the extracellular matrix in fibrotic lungs is important for evaluating the pathological condition as well as the distribution of drugs to pulmonary focus sites and their therapeutic effects. In this study, we compared techniques for staining the extracellular matrix in combination with optical tissue-clearing treatment, with the goal of developing three-dimensional imaging methods for focus sites in pulmonary fibrosis. Mouse models of pulmonary fibrosis were prepared via the intrapulmonary administration of bleomycin. Fluorescent-labeled tomato lectin, a collagen I antibody, and Col-F, a fluorescent probe for collagen and elastin, were used to compare the imaging of fibrotic foci in intact fibrotic lungs. These lung samples were cleared using the ClearT2 tissue-clearing technique. The cleared lungs were observed two-dimensionally using laser-scanning confocal microscopy, and the images were compared with those of lung tissue sections. Moreover, three-dimensional images were reconstructed from serial two-dimensional images. Fluorescent-labeled tomato lectin did not enable the visualization of fibrotic foci in cleared fibrotic lungs. Although collagen I in fibrotic lungs could be visualized via immunofluorescence staining, it was clearly visible only to a depth of 40 μm from the lung surface. Col-F staining enabled the visualization of collagen and elastin to a depth of 120 μm in cleared lung tissues. Furthermore, we visualized the three-dimensional extracellular matrix in cleared fibrotic lungs using Col-F, and the images provided better visualization than immunofluorescence staining. These results suggest that ClearT2 tissue-clearing treatment combined with Col-F staining represents a simple and rapid technique for imaging fibrotic foci in intact fibrotic lungs. This study provides important information for imaging various organs affected by extracellular matrix-related diseases.
{"title":"Three-Dimensional Imaging of Pulmonary Fibrotic Foci at the Alveolar Scale Using Tissue-Clearing Treatment with Staining Techniques of Extracellular Matrix.","authors":"Kohei Togami, Hiroaki Ozaki, Yuki Yumita, Anri Kitayama, Hitoshi Tada, Sumio Chono","doi":"10.1155/2020/8815231","DOIUrl":"https://doi.org/10.1155/2020/8815231","url":null,"abstract":"<p><p>Idiopathic pulmonary fibrosis is a progressive, chronic lung disease characterized by the accumulation of extracellular matrix proteins, including collagen and elastin. Imaging of extracellular matrix in fibrotic lungs is important for evaluating its pathological condition as well as the distribution of drugs to pulmonary focus sites and their therapeutic effects. In this study, we compared techniques of staining the extracellular matrix with optical tissue-clearing treatment for developing three-dimensional imaging methods for focus sites in pulmonary fibrosis. Mouse models of pulmonary fibrosis were prepared via the intrapulmonary administration of bleomycin. Fluorescent-labeled tomato lectin, collagen I antibody, and Col-F, which is a fluorescent probe for collagen and elastin, were used to compare the imaging of fibrotic foci in intact fibrotic lungs. These lung samples were cleared using the Clear<sup>T2</sup> tissue-clearing technique. The cleared lungs were two dimensionally observed using laser-scanning confocal microscopy, and the images were compared with those of the lung tissue sections. Moreover, three-dimensional images were reconstructed from serial two-dimensional images. Fluorescent-labeled tomato lectin did not enable the visualization of fibrotic foci in cleared fibrotic lungs. Although collagen I in fibrotic lungs could be visualized via immunofluorescence staining, collagen I was clearly visible only until 40 <i>μ</i>m from the lung surface. Col-F staining facilitated the visualization of collagen and elastin to a depth of 120 <i>μ</i>m in cleared lung tissues. Furthermore, we visualized the three-dimensional extracellular matrix in cleared fibrotic lungs using Col-F, and the images provided better visualization than immunofluorescence staining. These results suggest that Clear<sup>T2</sup> tissue-clearing treatment combined with Col-F staining represents a simple and rapid technique for imaging fibrotic foci in intact fibrotic lungs. This study provides important information for imaging various organs with extracellular matrix-related diseases.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2020 ","pages":"8815231"},"PeriodicalIF":7.6,"publicationDate":"2020-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7787752/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38827591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-18. eCollection Date: 2020-01-01. DOI: 10.1155/2020/8846220
Wei He, Yu Zhang, Junling Ding, Linman Zhao
The phase cycling method is a state-of-the-art method for reconstructing complex-valued MR images. However, when it follows a practical two-dimensional (2D) subsampled Cartesian acquisition, which enforces random sampling only in the phase-encoding direction, a number of artifacts appear in the magnitude image. A modified approach is proposed to remove these artifacts under practical MRI subsampling by adding one-dimensional total variation (TV) regularization into the phase cycling method to "pre-process" the magnitude component before its update. Furthermore, an operation used in SFISTA is employed to update the magnitude and phase images for better solutions. The experimental results demonstrate the ability of the proposed method to eliminate the ring artifacts and improve the magnitude reconstruction.
{"title":"A Modified Phase Cycling Method for Complex-Valued MRI Reconstruction.","authors":"Wei He, Yu Zhang, Junling Ding, Linman Zhao","doi":"10.1155/2020/8846220","DOIUrl":"https://doi.org/10.1155/2020/8846220","url":null,"abstract":"<p><p>The phase cycling method is a state-of-the-art method to reconstruct complex-valued MR image. However, when it follows practical two-dimensional (2D) subsampling Cartesian acquisition which is only enforcing random sampling in the phase-encoding direction, a number of artifacts in magnitude appear. A modified approach is proposed to remove these artifacts under practical MRI subsampling, by adding one-dimensional total variation (TV) regularization into the phase cycling method to \"pre-process\" the magnitude component before its update. Furthermore, an operation used in SFISTA is employed to update the magnitude and phase images for better solutions. The results of the experiments show the ability of the proposed method to eliminate the ring artifacts and improve the magnitude reconstruction.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2020 ","pages":"8846220"},"PeriodicalIF":7.6,"publicationDate":"2020-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2020/8846220","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38680662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the clinical practice of pediatric automatic bone age assessment (BAA), the extraction of the object area in hand radiographs is an important step that directly affects the prediction accuracy of the BAA, yet no perfect segmentation solution has been found. This work aims to develop an automatic hand radiograph segmentation method with high precision and efficiency. We treated the hand segmentation task as a classification problem, with the optimal segmentation threshold for each image regarded as the prediction target. We utilized the normalized histogram, mean value, and variance of each image as input features to train the classification model, based on ensemble learning with multiple classifiers. The dataset included 600 left-hand radiographs with bone ages ranging from 1 to 18 years. Compared with traditional segmentation methods and the state-of-the-art U-Net network, the proposed method performed better, with higher precision and a lower computational load, achieving an average PSNR of 52.43 dB, SSIM of 0.97, DSC of 0.97, and JSI of 0.91, which makes it more suitable for clinical application. Furthermore, the experimental results also verified that hand radiograph segmentation can improve BAA performance by at least 13% on average.
{"title":"Ensemble Learning with Multiclassifiers on Pediatric Hand Radiograph Segmentation for Bone Age Assessment.","authors":"Rui Liu, Yuanyuan Jia, Xiangqian He, Zhe Li, Jinhua Cai, Hao Li, Xiao Yang","doi":"10.1155/2020/8866700","DOIUrl":"10.1155/2020/8866700","url":null,"abstract":"<p><p>In the study of pediatric automatic bone age assessment (BAA) in clinical practice, the extraction of the object area in hand radiographs is an important part, which directly affects the prediction accuracy of the BAA. But no perfect segmentation solution has been found yet. This work is to develop an automatic hand radiograph segmentation method with high precision and efficiency. We considered the hand segmentation task as a classification problem. The optimal segmentation threshold for each image was regarded as the prediction target. We utilized the normalized histogram, mean value, and variance of each image as input features to train the classification model, based on ensemble learning with multiple classifiers. 600 left-hand radiographs with the bone age ranging from 1 to 18 years old were included in the dataset. Compared with traditional segmentation methods and the state-of-the-art U-Net network, the proposed method performed better with a higher precision and less computational load, achieving an average PSNR of 52.43 dB, SSIM of 0.97, DSC of 0.97, and JSI of 0.91, which is more suitable in clinical application. Furthermore, the experimental results also verified that hand radiograph segmentation could bring an average improvement for BAA performance of at least 13%.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2020 ","pages":"8866700"},"PeriodicalIF":7.6,"publicationDate":"2020-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7609149/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38593312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-10-06. eCollection Date: 2020-01-01. DOI: 10.1155/2020/8889023
Arun Sharma, Sheeba Rani, Dinesh Gupta
The ongoing pandemic of coronavirus disease 2019 (COVID-19) has led to a global health and healthcare crisis, apart from its tremendous socioeconomic effects. One of the significant challenges in this crisis is to identify and monitor COVID-19 patients quickly and efficiently to facilitate timely decisions for their treatment, monitoring, and management. Research efforts are underway to develop less time-consuming methods to replace or supplement RT-PCR-based methods. The present study is aimed at creating efficient deep learning models, trained with chest X-ray images, for rapid screening of COVID-19 patients. We used publicly available posteroanterior (PA) chest X-ray images of adult COVID-19 patients to develop Artificial Intelligence (AI)-based classification models for COVID-19 and other major infectious diseases. To increase the dataset size and develop generalized models, we performed 25 different types of augmentations on the original images. Furthermore, we utilized a transfer learning approach for training and testing the classification models. The combination of the two best-performing models (each trained on 286 images, rotated through a 120° or 140° angle) displayed the highest prediction accuracy for normal, COVID-19, non-COVID-19, pneumonia, and tuberculosis images. AI-based classification models trained through the transfer learning approach can efficiently classify chest X-ray images representing the studied diseases. Our method is more efficient than previously published methods and is one step toward implementing AI-based methods for classification problems in biomedical imaging related to COVID-19.
{"title":"Artificial Intelligence-Based Classification of Chest X-Ray Images into COVID-19 and Other Infectious Diseases.","authors":"Arun Sharma, Sheeba Rani, Dinesh Gupta","doi":"10.1155/2020/8889023","DOIUrl":"10.1155/2020/8889023","url":null,"abstract":"<p><p>The ongoing pandemic of coronavirus disease 2019 (COVID-19) has led to global health and healthcare crisis, apart from the tremendous socioeconomic effects. One of the significant challenges in this crisis is to identify and monitor the COVID-19 patients quickly and efficiently to facilitate timely decisions for their treatment, monitoring, and management. Research efforts are on to develop less time-consuming methods to replace or to supplement RT-PCR-based methods. The present study is aimed at creating efficient deep learning models, trained with chest X-ray images, for rapid screening of COVID-19 patients. We used publicly available PA chest X-ray images of adult COVID-19 patients for the development of Artificial Intelligence (AI)-based classification models for COVID-19 and other major infectious diseases. To increase the dataset size and develop generalized models, we performed 25 different types of augmentations on the original images. Furthermore, we utilized the transfer learning approach for the training and testing of the classification models. The combination of two best-performing models (each trained on 286 images, rotated through 120° or 140° angle) displayed the highest prediction accuracy for normal, COVID-19, non-COVID-19, pneumonia, and tuberculosis images. AI-based classification models trained through the transfer learning approach can efficiently classify the chest X-ray images representing studied diseases. Our method is more efficient than previously published methods. It is one step ahead towards the implementation of AI-based methods for classification problems in biomedical imaging related to COVID-19.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2020 ","pages":"8889023"},"PeriodicalIF":7.6,"publicationDate":"2020-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7539085/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38498557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-08-31. DOI: 10.1101/2020.08.25.20182170
Mundher Mohammed Taresh, N. Zhu, T. Ali, Asaad Shakir Hameed, Modhi Lafta Mutar
Novel coronavirus pneumonia (COVID-19) is a contagious disease that has already caused thousands of deaths and infected millions of people worldwide. Thus, any technology that allows fast and highly accurate detection of COVID-19 infection can help healthcare professionals. This study aims to explore the effectiveness of artificial intelligence (AI) in the rapid and reliable detection of COVID-19 based on chest X-ray imaging. Reliable pretrained deep learning algorithms were applied to achieve the automatic detection of COVID-19-induced pneumonia from digital chest X-ray images. Moreover, the study evaluates the performance of advanced neural architectures proposed for the classification of medical images in recent years. The dataset used in the experiments comprises 274 COVID-19 cases, 380 viral pneumonia cases, and 380 healthy cases, derived from several open sources of X-rays and data available online. The confusion matrix provided a basis for testing the post-classification model, and the open-source library PyCM was used to compute the statistical parameters. The study revealed the superiority of the VGG16 model over the other models applied in this research, performing best in terms of both overall and per-class scores. According to the results, deep learning with X-ray imaging is useful in the collection of critical biological markers associated with COVID-19 infection and can assist physicians in diagnosing COVID-19 infection. Meanwhile, the high accuracy of this computer-aided diagnostic tool can significantly improve the speed and accuracy of COVID-19 diagnosis.
{"title":"Transfer Learning to Detect COVID-19 Automatically from X-Ray Images Using Convolutional Neural Networks","authors":"Mundher Mohammed Taresh, N. Zhu, T. Ali, Asaad Shakir Hameed, Modhi Lafta Mutar","doi":"10.1101/2020.08.25.20182170","DOIUrl":"https://doi.org/10.1101/2020.08.25.20182170","url":null,"abstract":"Novel coronavirus pneumonia (COVID-19) is a contagious disease that has already caused thousands of deaths and infected millions of people worldwide. Thus, all technological gadgets that allow the fast detection of COVID- 19 infection with high accuracy can offer help to healthcare professionals. This study is purposed to explore the effectiveness of artificial intelligence (AI) in the rapid and reliable detection of COVID-19 based on chest X-ray imaging. In this study, reliable pre-trained deep learning algorithms were applied to achieve the automatic detection of COVID-19-induced pneumonia from digital chest X-ray images. Moreover, the study aims to evaluate the performance of advanced neural architectures proposed for the classification of medical images over recent years. The data set used in the experiments involves 274 COVID-19 cases, 380 viral pneumonia, and 380 healthy cases, which was derived from several open sources of X-Rays, and the data available online. The confusion matrix provided a basis for testing the post-classification model. Furthermore, an open-source library PYCM was used to support the statistical parameters. The study revealed the superiority of Model vgg16 over other models applied to conduct this research where the model performed best in terms of overall scores and based-class scores. According to the research results, deep Learning with X-ray imaging is useful in the collection of critical biological markers associated with COVID-19 infection. The technique is conducive for the physicians to make a diagnosis of COVID-19 infection. Meanwhile, the high accuracy of this computer-aided diagnostic tool can significantly improve the speed and accuracy of COVID-19 diagnosis.","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2021 1","pages":""},"PeriodicalIF":7.6,"publicationDate":"2020-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43397552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-08-28. eCollection Date: 2020-01-01. DOI: 10.1155/2020/8873865
Xiezhang Li, Guocan Feng, Jiehua Zhu
The l1-norm regularization has attracted attention for image reconstruction in computed tomography. The l0-norm of the gradients of an image provides a measure of the sparsity of the image's gradients. In this paper, we present a new combined l1-norm and l0-norm regularization model for image reconstruction from limited projection data in computed tomography. We also propose an algorithm in the algebraic framework to solve the optimization effectively, using a nonmonotone alternating direction algorithm with hard thresholding. Numerical experiments indicate that the new algorithm achieves a substantial improvement by incorporating l0-norm regularization.
{"title":"An Algorithm of <i>l</i> <sub>1</sub>-Norm and <i>l</i> <sub>0</sub>-Norm Regularization Algorithm for CT Image Reconstruction from Limited Projection.","authors":"Xiezhang Li, Guocan Feng, Jiehua Zhu","doi":"10.1155/2020/8873865","DOIUrl":"https://doi.org/10.1155/2020/8873865","url":null,"abstract":"<p><p>The <i>l</i> <sub>1</sub>-norm regularization has attracted attention for image reconstruction in computed tomography. The <i>l</i> <sub>0</sub>-norm of the gradients of an image provides a measure of the sparsity of gradients of the image. In this paper, we present a new combined <i>l</i> <sub>1</sub>-norm and <i>l</i> <sub>0</sub>-norm regularization model for image reconstruction from limited projection data in computed tomography. We also propose an algorithm in the algebraic framework to solve the optimization effectively using the nonmonotone alternating direction algorithm with hard thresholding method. Numerical experiments indicate that this new algorithm makes much improvement by involving <i>l</i> <sub>0</sub>-norm regularization.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2020 ","pages":"8873865"},"PeriodicalIF":7.6,"publicationDate":"2020-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2020/8873865","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38361996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-08-18. eCollection Date: 2020-01-01. DOI: 10.1155/2020/8828855
Mohd Zulfaezal Che Azemin, Radhiana Hassan, Mohd Izzuddin Mohd Tamrin, Mohd Adli Md Ali
The key component in deep learning research is the availability of training data sets. With a limited number of publicly available COVID-19 chest X-ray images, the generalization and robustness of deep learning models developed to detect COVID-19 cases from these images are questionable. We aimed to use thousands of readily available chest radiograph images with clinical findings associated with COVID-19 as a training data set, mutually exclusive from the images with confirmed COVID-19 cases, which were used as the testing data set. We used a deep learning model based on the ResNet-101 convolutional neural network architecture, which was pretrained to recognize objects from a million images and then retrained to detect abnormality in chest X-ray images. The performance of the model in terms of area under the receiver operating characteristic curve, sensitivity, specificity, and accuracy was 0.82, 77.3%, 71.8%, and 71.9%, respectively. The strength of this study lies in the use of labels that have a strong clinical association with COVID-19 cases and the use of mutually exclusive publicly available data for training, validation, and testing.
{"title":"COVID-19 Deep Learning Prediction Model Using Publicly Available Radiologist-Adjudicated Chest X-Ray Images as Training Data: Preliminary Findings.","authors":"Mohd Zulfaezal Che Azemin, Radhiana Hassan, Mohd Izzuddin Mohd Tamrin, Mohd Adli Md Ali","doi":"10.1155/2020/8828855","DOIUrl":"https://doi.org/10.1155/2020/8828855","url":null,"abstract":"<p><p>The key component in deep learning research is the availability of training data sets. With a limited number of publicly available COVID-19 chest X-ray images, the generalization and robustness of deep learning models to detect COVID-19 cases developed based on these images are questionable. We aimed to use thousands of readily available chest radiograph images with clinical findings associated with COVID-19 as a training data set, mutually exclusive from the images with confirmed COVID-19 cases, which will be used as the testing data set. We used a deep learning model based on the ResNet-101 convolutional neural network architecture, which was pretrained to recognize objects from a million of images and then retrained to detect abnormality in chest X-ray images. The performance of the model in terms of area under the receiver operating curve, sensitivity, specificity, and accuracy was 0.82, 77.3%, 71.8%, and 71.9%, respectively. The strength of this study lies in the use of labels that have a strong clinical association with COVID-19 cases and the use of mutually exclusive publicly available data for training, validation, and testing.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2020 ","pages":"8828855"},"PeriodicalIF":7.6,"publicationDate":"2020-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2020/8828855","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38313824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-04-01. eCollection Date: 2020-01-01. DOI: 10.1155/2020/9239753
Inayatullah S Sayed, Siti S Ismail
In single photon emission computed tomography (SPECT) imaging, the choice of a suitable filter and its parameters for noise reduction is a major challenge, and image quality suffers if an improper filter is selected. Filtered back projection (FBP) is the most popular technique for image reconstruction in SPECT, and different types of reconstruction filters are used with it, such as the Butterworth and the Hamming. In this study, the effects of the Butterworth filter on the quality of reconstructed images were compared with those of the Hamming filter. A Philips ADAC Forte gamma camera fitted with a low-energy, high-resolution collimator was used. SPECT data were acquired by scanning a phantom with an insert composed of hot and cold regions; a Technetium-99m radioactive solution was homogeneously mixed into the phantom. Furthermore, a symmetrical energy window (20%) centered at 140 keV was set. Images were reconstructed by the FBP method. Cutoff frequency values of 0.35, 0.40, 0.45, and 0.50 cycles/cm were selected for both filters, and the order of the Butterworth filter was set at 7. Images of hot and cold regions were analyzed in terms of detectability, contrast, and signal-to-noise ratio (SNR). The findings of our study indicate that the Butterworth filter revealed more hot and cold regions in reconstructed images and recorded higher contrast values than the Hamming filter. However, with the Butterworth filter, the SNR for both types of regions decreased as the cutoff frequency increased, relative to the Hamming filter. Overall, the Butterworth filter provided superior results to the Hamming filter, and the effects of both filters on the quality of hot and cold region images varied with the cutoff frequency.
{"title":"Comparison of Low-Pass Filters for SPECT Imaging.","authors":"Inayatullah S Sayed, Siti S Ismail","doi":"10.1155/2020/9239753","DOIUrl":"https://doi.org/10.1155/2020/9239753","url":null,"abstract":"<p><p>In single photon emission computed tomography (SPECT) imaging, the choice of a suitable filter and its parameters for noise reduction purposes is a big challenge. Adverse effects on image quality arise if an improper filter is selected. Filtered back projection (FBP) is the most popular technique for image reconstruction in SPECT. With this technique, different types of reconstruction filters are used, such as the Butterworth and the Hamming. In this study, the effects on the quality of reconstructed images of the Butterworth filter were compared with the ones of the Hamming filter. A Philips ADAC forte gamma camera was used. A low-energy, high-resolution collimator was installed on the gamma camera. SPECT data were acquired by scanning a phantom with an insert composed of hot and cold regions. A Technetium-99m radioactive solution was homogenously mixed into the phantom. Furthermore, a symmetrical energy window (20%) centered at 140 keV was adjusted. Images were reconstructed by the FBP method. Various cutoff frequency values, namely, 0.35, 0.40, 0.45, and 0.50 cycles/cm, were selected for both filters, whereas for the Butterworth filter, the order was set at 7. Images of hot and cold regions were analyzed in terms of detectability, contrast, and signal-to-noise ratio (SNR). The findings of our study indicate that the Butterworth filter was able to expose more hot and cold regions in reconstructed images. In addition, higher contrast values were recorded, as compared to the Hamming filter. However, with the Butterworth filter, the decrease in SNR for both types of regions with the increase in cutoff frequency as compared to the Hamming filter was obtained. Overall, the Butterworth filter under investigation provided superior results than the Hamming filter. Effects of both filters on the quality of hot and cold region images varied with the change in cutoff frequency.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2020 ","pages":"9239753"},"PeriodicalIF":7.6,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2020/9239753","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37849424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bone age assessment (BAA) is an essential topic in the clinical practice of evaluating the biological maturity of children. Because the manual method is time-consuming and prone to observer variability, it is attractive to develop computer-aided and automated methods for BAA. In this paper, we present a fully automatic BAA method. To eliminate noise in a raw X-ray image, we start by using U-Net to precisely segment the hand mask from the raw image. Although U-Net can perform the segmentation with high precision, it needs a large annotated dataset. To alleviate the annotation burden, we propose using deep active learning (AL) to intentionally select unlabeled data samples that carry sufficient information. These samples are given to an oracle for annotation and then used for subsequent training. Initially, only 300 images are manually annotated, after which the improved U-Net within the AL framework can robustly segment all 12,611 images in the RSNA dataset. The AL segmentation model achieved a Dice score of 0.95 on the annotated testing set. To optimize the learning process, we employ six off-the-shelf deep Convolutional Neural Networks (CNNs) with weights pretrained on ImageNet and use them to extract features from preprocessed hand images with a transfer learning technique. Finally, a variety of ensemble regression algorithms are applied to perform BAA. In addition, we select a specific CNN for feature extraction and explain the reasons for that choice. Experimental results show that the proposed approach achieved a discrepancy between manual and predicted bone age of about 6.96 and 7.35 months for the male and female cohorts, respectively, on the RSNA dataset. These accuracies are comparable to state-of-the-art performance.
{"title":"Fully Automated Bone Age Assessment on Large-Scale Hand X-Ray Dataset.","authors":"Xiaoying Pan, Yizhe Zhao, Hao Chen, De Wei, Chen Zhao, Zhi Wei","doi":"10.1155/2020/8460493","DOIUrl":"https://doi.org/10.1155/2020/8460493","url":null,"abstract":"<p><p>Bone age assessment (BAA) is an essential topic in the clinical practice of evaluating the biological maturity of children. Because the manual method is time-consuming and prone to observer variability, it is attractive to develop computer-aided and automated methods for BAA. In this paper, we present a fully automatic BAA method. To eliminate noise in a raw X-ray image, we start with using U-Net to precisely segment hand mask image from a raw X-ray image. Even though U-Net can perform the segmentation with high precision, it needs a bigger annotated dataset. To alleviate the annotation burden, we propose to use deep active learning (AL) to select unlabeled data samples with sufficient information intentionally. These samples are given to Oracle for annotation. After that, they are then used for subsequential training. In the beginning, only 300 data are manually annotated and then the improved U-Net within the AL framework can robustly segment all the 12611 images in RSNA dataset. The AL segmentation model achieved a Dice score at 0.95 in the annotated testing set. To optimize the learning process, we employ six off-the-shell deep Convolutional Neural Networks (CNNs) with pretrained weights on ImageNet. We use them to extract features of preprocessed hand images with a transfer learning technique. In the end, a variety of ensemble regression algorithms are applied to perform BAA. Besides, we choose a specific CNN to extract features and explain why we select that CNN. Experimental results show that the proposed approach achieved discrepancy between manual and predicted bone age of about 6.96 and 7.35 months for male and female cohorts, respectively, on the RSNA dataset. These accuracies are comparable to state-of-the-art performance.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2020 ","pages":"8460493"},"PeriodicalIF":7.6,"publicationDate":"2020-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2020/8460493","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37752031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-02-06. eCollection Date: 2020-01-01. DOI: 10.1155/2020/7862089
Sarah E Shelton, Jodi Stone, Fei Gao, Donglin Zeng, Paul A Dayton
The purpose of this study is to determine whether microvascular tortuosity can be used as an imaging biomarker for the presence of tumor-associated angiogenesis and whether imaging this biomarker can serve as a specific and sensitive method of locating solid tumors. Acoustic angiography, an ultrasound-based microvascular imaging technology, was used to visualize the development of angiogenesis in a spontaneous mouse model of breast cancer (n = 48). A reader study was used to assess visual discrimination between image types, and quantitative methods utilized metrics of tortuosity and spatial clustering for tumor detection. The reader study resulted in an area under the curve of 0.8, while the clustering approach yielded the best classification, with an area under the curve of 0.95. Both the qualitative and quantitative methods produced a correlation between sensitivity and tumor diameter. Imaging of vascular geometry with acoustic angiography provides a robust method for discriminating between tumor and healthy tissue in a mouse model of breast cancer. Multiple methods of analysis have been presented for a wide range of tumor sizes. Application of these techniques to clinical imaging could improve breast cancer diagnosis, as well as improve specificity in assessing cancer in other tissues. The clustering approach may also be beneficial for other types of morphological analysis beyond vascular ultrasound images.
{"title":"Microvascular Ultrasonic Imaging of Angiogenesis Identifies Tumors in a Murine Spontaneous Breast Cancer Model.","authors":"Sarah E Shelton, Jodi Stone, Fei Gao, Donglin Zeng, Paul A Dayton","doi":"10.1155/2020/7862089","DOIUrl":"https://doi.org/10.1155/2020/7862089","url":null,"abstract":"<p><p>The purpose of this study is to determine if microvascular tortuosity can be used as an imaging biomarker for the presence of tumor-associated angiogenesis and if imaging this biomarker can be used as a specific and sensitive method of locating solid tumors. Acoustic angiography, an ultrasound-based microvascular imaging technology, was used to visualize angiogenesis development of a spontaneous mouse model of breast cancer (<i>n</i> = 48). A reader study was used to assess visual discrimination between image types, and quantitative methods utilized metrics of tortuosity and spatial clustering for tumor detection. The reader study resulted in an area under the curve of 0.8, while the clustering approach resulted in the best classification with an area under the curve of 0.95. Both the qualitative and quantitative methods produced a correlation between sensitivity and tumor diameter. Imaging of vascular geometry with acoustic angiography provides a robust method for discriminating between tumor and healthy tissue in a mouse model of breast cancer. Multiple methods of analysis have been presented for a wide range of tumor sizes. Application of these techniques to clinical imaging could improve breast cancer diagnosis, as well as improve specificity in assessing cancer in other tissues. The clustering approach may be beneficial for other types of morphological analysis beyond vascular ultrasound images.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2020 ","pages":"7862089"},"PeriodicalIF":7.6,"publicationDate":"2020-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1155/2020/7862089","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37670230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}