A brain tumor is a deadly neurological disease caused by abnormal and uncontrollable growth of cells inside the brain or skull. The mortality rate of patients suffering from this disease is rising steadily. Manually analysing Magnetic Resonance Images (MRIs) is inadequate for efficient and accurate brain tumor diagnosis. Early diagnosis enables timely treatment and consequently improves patient survival rates. Modern brain imaging methodologies have improved the detection rate of brain tumors. In the past few years, a great deal of research has been carried out on computer-aided diagnosis (CAD) of human brain tumors, with the goal of approaching 100% diagnostic accuracy. This research focuses on early diagnosis of brain tumors via a Convolutional Neural Network (CNN) to improve on state-of-the-art diagnostic accuracy. The proposed CNN is trained on a benchmark dataset, BR35H, containing brain tumor MRIs. The performance and generalizability of the model are evaluated on six different datasets, i.e., BMI-I, BTI, BMI-II, BTS, BMI-III, and BD-BT. To improve the performance of the model and to make it generalize to entirely unseen data, different geometric data augmentation techniques, along with statistical standardization, are employed. The proposed CNN-based CAD system for brain tumor diagnosis outperforms comparable systems, achieving an average accuracy of around 98.8% and a specificity of around 0.99. It also achieves 100% correct diagnosis on two brain MRI datasets, i.e., BTS and BD-BT. The performance of the proposed system is also compared with existing systems, and the analysis reveals that it outperforms all of them.
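As a minimal illustration of the kind of geometric augmentation plus statistical standardization the abstract mentions (a generic sketch, not the authors' actual pipeline; the transforms and array sizes here are assumptions):

```python
import numpy as np

def standardize(img):
    """Zero-mean, unit-variance standardization of a single MRI slice."""
    return (img - img.mean()) / (img.std() + 1e-8)

def augment(img, rng):
    """Apply a random geometric transform: flips and a 90-degree rotation."""
    out = img
    if rng.random() < 0.5:
        out = np.fliplr(out)                   # horizontal flip
    if rng.random() < 0.5:
        out = np.flipud(out)                   # vertical flip
    out = np.rot90(out, k=rng.integers(0, 4))  # rotate 0/90/180/270 degrees
    return out

rng = np.random.default_rng(0)
slice_ = rng.random((128, 128))                # stand-in for a brain MRI slice
batch = np.stack([standardize(augment(slice_, rng)) for _ in range(8)])
```

Each augmented copy is standardized independently, so every training sample the CNN sees has roughly zero mean and unit variance.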
Title: Computer-Aided Brain Tumor Diagnosis: Performance Evaluation of Deep Learner CNN Using Augmented Brain MRI
Authors: Asma Naseer, Tahreem Yasir, Arifah Azhar, Tanzeela Shakeel, Kashif Zafar
DOI: 10.1155/2021/5513500 | International Journal of Biomedical Imaging | Pub Date: 2021-06-13
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8216815/pdf/
Pub Date: 2021-05-26 | eCollection Date: 2021-01-01 | DOI: 10.1155/2021/9780202
Daniel J Tward, Jun Ma, Michael I Miller, Laurent Younes
[This corrects the article DOI: 10.1155/2013/205494.].
Title: Corrigendum to "Robust Diffeomorphic Mapping via Geodesically Controlled Active Shapes"
DOI: 10.1155/2021/9780202 | International Journal of Biomedical Imaging | Pub Date: 2021-05-26
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8179761/pdf/
Pub Date: 2021-05-15 | eCollection Date: 2021-01-01 | DOI: 10.1155/2021/8828404
Mundher Mohammed Taresh, Ningbo Zhu, Talal Ahmed Ali Ali, Asaad Shakir Hameed, Modhi Lafta Mutar
The novel coronavirus disease 2019 (COVID-19) is a contagious disease that has caused thousands of deaths and infected millions worldwide. Technologies that allow fast, highly accurate detection of COVID-19 infections can therefore offer healthcare professionals much-needed help. This study evaluates the effectiveness of state-of-the-art pretrained Convolutional Neural Networks (CNNs) for automatic diagnosis of COVID-19 from chest X-rays (CXRs). The dataset used in the experiments consists of 1200 CXR images from individuals with COVID-19, 1345 CXR images from individuals with viral pneumonia, and 1341 CXR images from healthy individuals. Several pretrained deep learning models were fine-tuned to maximise detection accuracy, in order to identify the best-performing architecture. The results showed that deep learning with X-ray imaging is useful in capturing critical biological markers associated with COVID-19 infection. VGG16 and MobileNet obtained the highest accuracy of 98.28%. VGG16 outperformed all other models in COVID-19 detection, with an accuracy, F1 score, precision, specificity, and sensitivity of 98.72%, 97.59%, 96.43%, 98.70%, and 98.78%, respectively. The outstanding performance of these pretrained models can significantly improve the speed and accuracy of COVID-19 diagnosis. However, a larger dataset of COVID-19 X-ray images is required for more accurate and reliable identification of COVID-19 infections with deep transfer learning. This would be extremely beneficial during a pandemic, when the disease burden and the need for preventive measures conflict with the available resources.
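The transfer-learning recipe described here (a pretrained backbone such as VGG16 with a fine-tuned classification head) reduces to one core idea: keep a fixed feature extractor and train only a lightweight classifier on top. A toy numpy sketch of that idea, with a random frozen projection standing in for the pretrained backbone and synthetic feature vectors standing in for CXR data (all hypothetical, not the authors' setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# "Pretrained" frozen feature extractor: here just a fixed random ReLU projection.
W_frozen = rng.normal(size=(64, 16))
def extract_features(x):
    return np.maximum(x @ W_frozen, 0.0)       # weights never updated

# Toy separable 2-class data (stand-ins for COVID-19 vs. normal CXR inputs).
X = np.vstack([rng.normal(loc=-1.0, size=(100, 64)),
               rng.normal(loc=+1.0, size=(100, 64))])
y = np.array([0] * 100 + [1] * 100)

# Fine-tune only the classification head (logistic regression) on frozen features.
F = extract_features(X)
w, b = np.zeros(F.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))     # sigmoid probabilities
    w -= 0.1 * (F.T @ (p - y)) / len(y)        # gradient step on head weights only
    b -= 0.1 * float(np.mean(p - y))

accuracy = float(np.mean(((F @ w + b) > 0).astype(int) == y))
```

Because only the head is trained, very little labeled data is needed, which is exactly why transfer learning suits small medical-imaging datasets.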
Title: Transfer Learning to Detect COVID-19 Automatically from X-Ray Images Using Convolutional Neural Networks
DOI: 10.1155/2021/8828404 | International Journal of Biomedical Imaging | Pub Date: 2021-05-15
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8203406/pdf/
Pub Date: 2021-04-14 | eCollection Date: 2021-01-01 | DOI: 10.1155/2021/6618666
Abubakar M Ashir, Salisu Ibrahim, Mohammed Abdulghani, Abdullahi Abdu Ibrahim, Mohammed S Anwar
Diabetic retinopathy is one of the leading diseases affecting the eyes. Without early detection and treatment, it can lead to total blindness of the affected eyes. Recently, numerous researchers have attempted to produce automatic diabetic retinopathy detection techniques to support diagnosis and early treatment. In this manuscript, a new approach is proposed that uses features extracted from the fundus image via local extrema information combined with quantized Haralick features. The quantized features not only encode the textural Haralick features but also exploit the multiresolution information of numerous symptoms of diabetic retinopathy. A Long Short-Term Memory network together with the local extrema pattern provides a probabilistic approach for analyzing each segment of the image with higher precision, which helps suppress false positives. The proposed approach analyzes the retinal vasculature and hard-exudate symptoms of diabetic retinopathy on two different public datasets. Experimental results, evaluated using performance metrics such as specificity, accuracy, and sensitivity, are promising. Likewise, comparison with related state-of-the-art work supports the validity of the proposed method, which performs better than most of the methods used for comparison.
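Haralick features are statistics of a gray-level co-occurrence matrix (GLCM) built from a quantized image. A self-contained numpy sketch of that building block (generic GLCM contrast and homogeneity; the quantization level and offset are assumptions, and this omits the paper's local-extrema and LSTM stages):

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for pixel offset (dy, dx)."""
    q = np.minimum((img * levels).astype(int), levels - 1)  # quantize [0,1) image
    P = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            P[q[i, j], q[i + dy, j + dx]] += 1              # count co-occurrences
    return P / P.sum()

def haralick_contrast(P):
    i, j = np.indices(P.shape)
    return float(np.sum(P * (i - j) ** 2))

def haralick_homogeneity(P):
    i, j = np.indices(P.shape)
    return float(np.sum(P / (1.0 + np.abs(i - j))))

rng = np.random.default_rng(2)
patch = rng.random((32, 32))        # stand-in for a fundus-image patch in [0, 1)
P = glcm(patch)
features = [haralick_contrast(P), haralick_homogeneity(P)]
```

A perfectly uniform patch has zero contrast and homogeneity 1, which is a handy sanity check for any GLCM implementation.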
Title: Diabetic Retinopathy Detection Using Local Extrema Quantized Haralick Features with Long Short-Term Memory Network
DOI: 10.1155/2021/6618666 | International Journal of Biomedical Imaging | Pub Date: 2021-04-14
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8068542/pdf/
Pub Date: 2021-01-22 | eCollection Date: 2021-01-01 | DOI: 10.1155/2021/6664569
Fayadh Alenezi, K C Santosh
One of the major shortcomings of the Hopfield neural network (HNN) is that it may not always converge to a fixed point. The HNN is predominantly limited to local optimization during training to achieve network stability. In this paper, the convergence problem is addressed using two approaches: (a) sequencing the activation of a continuous modified HNN (MHNN) based on the geometric correlation of features within various image hyperplanes via pixel gradient vectors and (b) regulating those geometric pixel gradient vectors. This is achieved by regularizing the proposed MHNNs under cohomology, which enables them to act as an unconventional filter for pixel spectral sequences. The method shifts the focus to both local and global optimization in order to strengthen feature correlations within each image subspace; as a result, it enhances edges, information content, contrast, and resolution. The proposed algorithm was tested on fifteen different medical images and evaluated in terms of entropy, visual information fidelity (VIF), weighted peak signal-to-noise ratio (WPSNR), contrast, and homogeneity. Our results confirm its superiority over four existing benchmark enhancement methods.
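For context on the convergence behavior at issue: in the classical discrete HNN with symmetric weights, zero self-connections, and asynchronous updates, the network energy is non-increasing, so the state settles into a fixed point that may only be a local optimum. A small numpy demonstration of that textbook case (not the authors' continuous MHNN):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 16
pattern = rng.choice([-1, 1], size=N)

# Hebbian storage of one pattern; zero diagonal keeps the weights symmetric.
W = np.outer(pattern, pattern).astype(float) / N
np.fill_diagonal(W, 0.0)

def energy(s):
    return -0.5 * s @ W @ s          # classical Hopfield energy function

# Start from a corrupted copy of the stored pattern (two bits flipped).
state = pattern.copy()
state[[0, 5]] *= -1

energies = [energy(state)]
for _ in range(3):                   # a few asynchronous update sweeps
    for i in range(N):
        h = W[i] @ state             # local field at neuron i
        state[i] = 1 if h >= 0 else -1
        energies.append(energy(state))
```

The energy trace never increases and the state is pulled back to the stored pattern, i.e., to a local attractor; the paper's contribution is precisely about escaping such purely local behavior.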
Title: Geometric Regularized Hopfield Neural Network for Medical Image Enhancement
DOI: 10.1155/2021/6664569 | International Journal of Biomedical Imaging | Pub Date: 2021-01-22
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7847341/pdf/
El-Sayed H Ibrahim, Luba Frank, Dhiraj Baruah, V Emre Arpinar, Andrew S Nencka, Kevin M Koch, L Tugan Muftuler, Orhan Unal, Jadranka Stojanovska, Jason C Rubenstein, Sherry-Ann Brown, John Charlson, Elizabeth M Gore, Carmen Bergom
Cardiac magnetic resonance imaging (CMR) is considered the gold standard for measuring cardiac function. Further, in a single CMR exam, information about cardiac structure, tissue composition, and blood flow can be obtained. Nevertheless, CMR is underutilized due to long scanning times, the need for multiple breath-holds, use of a contrast agent, and relatively high cost. In this work, we propose a rapid, comprehensive, contrast-free CMR exam that does not require repeated breath-holds, based on recent developments in imaging sequences. Time-consuming conventional sequences have been replaced by advanced sequences in the proposed exam: conventional 2D cine and phase-contrast (PC) sequences are replaced by optimized 3D-cine and 4D-flow sequences, respectively, and conventional myocardial tagging is replaced by fast strain-encoding (SENC) imaging. Finally, T1 and T2 mapping sequences are included, which allows for myocardial tissue characterization. The proposed rapid exam has been tested in vivo and reduced the scan time from more than an hour with conventional sequences to less than 20 minutes. The corresponding cardiovascular measurements showed good agreement with those from conventional sequences and could differentiate between healthy volunteers and patients. Compared to 2D cine imaging, which requires 12-16 separate breath-holds, the implemented 3D-cine sequence allows for whole-heart coverage in 1-2 breath-holds. The 4D-flow sequence allows for whole-chest coverage in less than 10 minutes. Finally, SENC imaging reduces the scan time to only one slice per heartbeat. In conclusion, the proposed rapid, contrast-free, and comprehensive cardiovascular exam requires neither repeated breath-holds nor supervision by a cardiac imager. These improvements make it tolerable for patients and would help improve the cost-effectiveness of CMR and increase its adoption in clinical practice.
Title: Value CMR: Towards a Comprehensive, Rapid, Cost-Effective Cardiovascular Magnetic Resonance Imaging
DOI: 10.1155/2021/8851958 | International Journal of Biomedical Imaging | Pub Date: 2021-01-01
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8147553/pdf/
Idiopathic pulmonary fibrosis is a progressive, chronic lung disease characterized by the accumulation of extracellular matrix proteins, including collagen and elastin. Imaging of the extracellular matrix in fibrotic lungs is important for evaluating the pathological condition as well as the distribution of drugs to pulmonary focus sites and their therapeutic effects. In this study, we compared techniques for staining the extracellular matrix combined with optical tissue-clearing treatment, with the aim of developing three-dimensional imaging methods for focus sites in pulmonary fibrosis. Mouse models of pulmonary fibrosis were prepared via intrapulmonary administration of bleomycin. Fluorescent-labeled tomato lectin, collagen I antibody, and Col-F, a fluorescent probe for collagen and elastin, were used to compare the imaging of fibrotic foci in intact fibrotic lungs. The lung samples were cleared using the ClearT2 tissue-clearing technique. The cleared lungs were observed two-dimensionally using laser-scanning confocal microscopy, and the images were compared with those of lung tissue sections. Three-dimensional images were then reconstructed from serial two-dimensional images. Fluorescent-labeled tomato lectin did not enable visualization of fibrotic foci in cleared fibrotic lungs. Although collagen I in fibrotic lungs could be visualized via immunofluorescence staining, it was clearly visible only to a depth of 40 μm from the lung surface. Col-F staining enabled visualization of collagen and elastin to a depth of 120 μm in cleared lung tissues. Furthermore, we visualized the three-dimensional extracellular matrix in cleared fibrotic lungs using Col-F, and these images provided better visualization than immunofluorescence staining. These results suggest that ClearT2 tissue-clearing combined with Col-F staining is a simple and rapid technique for imaging fibrotic foci in intact fibrotic lungs. This study provides important information for imaging various organs affected by extracellular matrix-related diseases.
Title: Three-Dimensional Imaging of Pulmonary Fibrotic Foci at the Alveolar Scale Using Tissue-Clearing Treatment with Staining Techniques of Extracellular Matrix
Authors: Kohei Togami, Hiroaki Ozaki, Yuki Yumita, Anri Kitayama, Hitoshi Tada, Sumio Chono
DOI: 10.1155/2020/8815231 | International Journal of Biomedical Imaging | Pub Date: 2020-12-29
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7787752/pdf/
Pub Date: 2020-11-18 | eCollection Date: 2020-01-01 | DOI: 10.1155/2020/8846220
Wei He, Yu Zhang, Junling Ding, Linman Zhao
The phase cycling method is a state-of-the-art method for reconstructing complex-valued MR images. However, when it is applied to practical two-dimensional (2D) subsampled Cartesian acquisitions, which enforce random sampling only in the phase-encoding direction, a number of artifacts appear in the magnitude image. A modified approach is proposed to remove these artifacts under practical MRI subsampling by adding one-dimensional total variation (TV) regularization to the phase cycling method to "pre-process" the magnitude component before its update. Furthermore, an operation used in SFISTA is employed to update the magnitude and phase images for better solutions. Experimental results show that the proposed method eliminates the ring artifacts and improves the magnitude reconstruction.
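The effect of one-dimensional TV regularization on a magnitude profile can be illustrated on a toy signal. A hedged sketch using smoothed-TV gradient descent (a generic 1D TV denoiser; the actual paper embeds TV inside the phase cycling iteration and uses an SFISTA-style update, neither of which is reproduced here):

```python
import numpy as np

def tv(x):
    return float(np.sum(np.abs(np.diff(x))))

def tv_denoise_1d(y, lam=0.3, step=0.05, iters=500, eps=1e-2):
    """Minimize 0.5*||x-y||^2 + lam*sum_i sqrt((x[i+1]-x[i])^2 + eps)
    by plain gradient descent on the smoothed total-variation objective."""
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)   # derivative of the smoothed |.| term
        grad = x - y                   # data-fidelity gradient
        grad[:-1] -= lam * g           # each difference pulls its left endpoint up
        grad[1:] += lam * g            # ...and its right endpoint down
        x -= step * grad
    return x

rng = np.random.default_rng(4)
clean = np.concatenate([np.zeros(50), np.ones(50)])  # piecewise-constant profile
noisy = clean + 0.2 * rng.normal(size=100)
denoised = tv_denoise_1d(noisy)
```

TV regularization suppresses small oscillations (the source of ringing-like artifacts) while largely preserving the sharp edge, which is why it pairs well with magnitude pre-processing.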
Title: A Modified Phase Cycling Method for Complex-Valued MRI Reconstruction
DOI: 10.1155/2020/8846220 | International Journal of Biomedical Imaging | Pub Date: 2020-11-18
In the study of pediatric automatic bone age assessment (BAA) in clinical practice, extraction of the object area in hand radiographs is an important step that directly affects the prediction accuracy of the BAA, yet no fully satisfactory segmentation solution has been found. This work develops an automatic hand radiograph segmentation method with high precision and efficiency. We treat hand segmentation as a classification problem: the optimal segmentation threshold for each image is the prediction target. We use the normalized histogram, mean, and variance of each image as input features to train the classification model, based on ensemble learning with multiple classifiers. The dataset comprises 600 left-hand radiographs with bone ages ranging from 1 to 18 years. Compared with traditional segmentation methods and the state-of-the-art U-Net network, the proposed method performed better, with higher precision and a lower computational load, achieving an average PSNR of 52.43 dB, SSIM of 0.97, DSC of 0.97, and JSI of 0.91, making it more suitable for clinical application. Furthermore, the experimental results also verified that hand radiograph segmentation can improve BAA performance by at least 13% on average.
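The pipeline described (global image statistics in, threshold class out, decided by an ensemble vote) can be sketched with toy data and a simple majority vote; the synthetic "radiographs" and the three weak learners below are hypothetical stand-ins, not the authors' classifiers:

```python
import numpy as np

rng = np.random.default_rng(5)

def make_images(brightness, n=50):
    """Synthetic 'radiographs' whose best threshold class tracks brightness."""
    return rng.normal(loc=brightness, scale=0.05, size=(n, 16, 16))

def features(img):
    return np.array([img.mean(), img.var()])   # mean and variance, as in the paper

X = np.array([features(im) for im in
              np.concatenate([make_images(0.3), make_images(0.7)])])
y = np.array([0] * 50 + [1] * 50)              # optimal-threshold class per image

# Three simple classifiers over the feature vector.
def clf_mean_cut(x):                           # fixed cut on mean intensity
    return int(x[0] > 0.5)

centroids = np.array([X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)])
def clf_centroid(x):                           # nearest class centroid
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

def clf_1nn(x):                                # 1-nearest neighbour on features
    return int(y[np.argmin(np.linalg.norm(X - x, axis=1))])

def ensemble(x):                               # majority vote of the three learners
    return int(round(np.mean([clf_mean_cut(x), clf_centroid(x), clf_1nn(x)])))

preds = np.array([ensemble(x) for x in X])
accuracy = float(np.mean(preds == y))
```

The appeal of this formulation is its cost: predicting one scalar threshold per image from a handful of global statistics is far cheaper than running a full segmentation network such as U-Net.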
{"title":"Ensemble Learning with Multiclassifiers on Pediatric Hand Radiograph Segmentation for Bone Age Assessment.","authors":"Rui Liu, Yuanyuan Jia, Xiangqian He, Zhe Li, Jinhua Cai, Hao Li, Xiao Yang","doi":"10.1155/2020/8866700","DOIUrl":"10.1155/2020/8866700","url":null,"abstract":"<p><p>In the study of pediatric automatic bone age assessment (BAA) in clinical practice, the extraction of the object area in hand radiographs is an important part, which directly affects the prediction accuracy of the BAA. But no perfect segmentation solution has been found yet. This work is to develop an automatic hand radiograph segmentation method with high precision and efficiency. We considered the hand segmentation task as a classification problem. The optimal segmentation threshold for each image was regarded as the prediction target. We utilized the normalized histogram, mean value, and variance of each image as input features to train the classification model, based on ensemble learning with multiple classifiers. 600 left-hand radiographs with the bone age ranging from 1 to 18 years old were included in the dataset. Compared with traditional segmentation methods and the state-of-the-art U-Net network, the proposed method performed better with a higher precision and less computational load, achieving an average PSNR of 52.43 dB, SSIM of 0.97, DSC of 0.97, and JSI of 0.91, which is more suitable in clinical application. 
Furthermore, the experimental results also verified that hand radiograph segmentation could bring an average improvement for BAA performance of at least 13%.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":null,"pages":null},"PeriodicalIF":7.6,"publicationDate":"2020-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7609149/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38593312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
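The core idea of this abstract — treating segmentation as classification, predicting a per-image threshold from normalized-histogram, mean, and variance features with an ensemble of classifiers — can be sketched on synthetic data. This is a toy reconstruction, not the authors' code: the estimator choices, bin count, synthetic images, and threshold-bucket labels are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def image_features(img, bins=32):
    """Normalized histogram plus mean and variance, mirroring the paper's feature set."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255), density=True)
    return np.concatenate([hist, [img.mean(), img.var()]])

# Synthetic stand-in for radiographs: a bright "hand" patch on a dark
# background. The label is a bucketed threshold between the two modes.
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(200):
    img = rng.normal(40, 10, (64, 64))            # dark background
    fg_level = rng.uniform(120, 220)              # bright object intensity
    img[16:48, 16:48] = rng.normal(fg_level, 10, (32, 32))
    img = np.clip(img, 0, 255)
    X.append(image_features(img))
    y.append(int((40 + fg_level) / 2) // 16)      # threshold bucket as class label
X, y = np.array(X), np.array(y)

# Ensemble of multiple classifiers with hard majority voting.
ensemble = VotingClassifier([
    ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ("knn", KNeighborsClassifier(3)),
    ("dt", DecisionTreeClassifier(random_state=0)),
])
ensemble.fit(X[:150], y[:150])
acc = ensemble.score(X[150:], y[150:])
```

Predicting a single threshold per image instead of a per-pixel mask is what keeps the computational load far below a U-Net-style segmenter.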
Pub Date: 2020-10-06; eCollection Date: 2020-01-01; DOI: 10.1155/2020/8889023
Arun Sharma, Sheeba Rani, Dinesh Gupta
The ongoing pandemic of coronavirus disease 2019 (COVID-19) has led to a global health and healthcare crisis, apart from its tremendous socioeconomic effects. One of the significant challenges in this crisis is to identify and monitor COVID-19 patients quickly and efficiently, to facilitate timely decisions for their treatment, monitoring, and management. Research efforts are underway to develop less time-consuming methods to replace or supplement RT-PCR-based methods. The present study aimed to create efficient deep learning models, trained on chest X-ray images, for rapid screening of COVID-19 patients. We used publicly available PA chest X-ray images of adult COVID-19 patients to develop Artificial Intelligence (AI)-based classification models for COVID-19 and other major infectious diseases. To increase the dataset size and develop generalized models, we performed 25 different types of augmentation on the original images. Furthermore, we utilized a transfer learning approach for training and testing the classification models. The combination of the two best-performing models (each trained on 286 images, rotated through a 120° or 140° angle) displayed the highest prediction accuracy for normal, COVID-19, non-COVID-19, pneumonia, and tuberculosis images. AI-based classification models trained through transfer learning can efficiently classify chest X-ray images of the studied diseases. Our method is more efficient than previously published methods and is a step towards implementing AI-based methods for classification problems in biomedical imaging related to COVID-19.
{"title":"Artificial Intelligence-Based Classification of Chest X-Ray Images into COVID-19 and Other Infectious Diseases.","authors":"Arun Sharma, Sheeba Rani, Dinesh Gupta","doi":"10.1155/2020/8889023","DOIUrl":"10.1155/2020/8889023","url":null,"abstract":"<p><p>The ongoing pandemic of coronavirus disease 2019 (COVID-19) has led to global health and healthcare crisis, apart from the tremendous socioeconomic effects. One of the significant challenges in this crisis is to identify and monitor the COVID-19 patients quickly and efficiently to facilitate timely decisions for their treatment, monitoring, and management. Research efforts are on to develop less time-consuming methods to replace or to supplement RT-PCR-based methods. The present study is aimed at creating efficient deep learning models, trained with chest X-ray images, for rapid screening of COVID-19 patients. We used publicly available PA chest X-ray images of adult COVID-19 patients for the development of Artificial Intelligence (AI)-based classification models for COVID-19 and other major infectious diseases. To increase the dataset size and develop generalized models, we performed 25 different types of augmentations on the original images. Furthermore, we utilized the transfer learning approach for the training and testing of the classification models. The combination of two best-performing models (each trained on 286 images, rotated through 120° or 140° angle) displayed the highest prediction accuracy for normal, COVID-19, non-COVID-19, pneumonia, and tuberculosis images. AI-based classification models trained through the transfer learning approach can efficiently classify the chest X-ray images representing studied diseases. Our method is more efficient than previously published methods. 
It is one step ahead towards the implementation of AI-based methods for classification problems in biomedical imaging related to COVID-19.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":null,"pages":null},"PeriodicalIF":7.6,"publicationDate":"2020-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7539085/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38498557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
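The geometric augmentation step that this record highlights — the two best models were trained on images rotated through 120° or 140° — can be sketched as follows. The function name and padding mode are illustrative assumptions; the paper's full pipeline of 25 augmentation types is not reproduced here.

```python
import numpy as np
from scipy.ndimage import rotate

def augment_rotations(img, angles=(120, 140)):
    """Rotate an image by each angle, keeping the original frame size.

    reshape=False preserves the array shape so every augmented X-ray can
    be fed to the same network input; mode="nearest" fills the corners
    exposed by the rotation instead of leaving hard zero padding.
    """
    return [rotate(img, angle, reshape=False, mode="nearest") for angle in angles]

# Stand-in for a preprocessed PA chest X-ray.
xray = np.random.default_rng(0).random((224, 224))
augmented = augment_rotations(xray)
```

With transfer learning, each rotated copy is passed through the same pretrained backbone as the original image, which is how a small set of 286 images can still yield a generalized classifier.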