Pub Date: 2024-10-28 | eCollection Date: 2024-01-01 | DOI: 10.1155/2024/5691909
Martin Segeroth, David Jean Winkel, Beat A Kaufmann, Ivo Strebel, Shan Yang, Joshy Cyriac, Jakob Wasserthal, Michael Bach, Pedro Lopez-Ayala, Alexander Sauter, Christian Mueller, Jens Bremerich, Michael Zellweger, Philip Haaf
Introduction: Pulmonary transit time (PTT) is the time it takes blood to pass from the right ventricle to the left ventricle via the pulmonary circulation, making it a potentially useful marker for heart failure. We assessed the association of PTT with diastolic dysfunction (DD) and mitral valve regurgitation (MVR). Methods: We evaluated routine stress perfusion cardiovascular magnetic resonance (CMR) scans, including assessment of PTT, in 83 patients with simultaneously available echocardiographic assessment. Relevant DD and MVR were defined as exceeding Grade I (impaired relaxation and mild regurgitation). PTT was determined from CMR rest perfusion scans. Normalized PTT (nPTT), adjusted for heart rate, was calculated using Bazett's formula. Results: Higher PTT and nPTT values were associated with higher-grade DD and MVR. The diagnostic accuracy for the prediction of DD, as quantified by the area under the ROC curve (AUC), was 0.73 (CI 0.61-0.85; p = 0.001) for PTT and 0.81 (CI 0.71-0.89; p < 0.001) for nPTT. For MVR, the diagnostic performance amounted to an AUC of 0.80 (CI 0.68-0.92; p < 0.001) for PTT and 0.78 (CI 0.65-0.90; p < 0.001) for nPTT. PTT values < 8 s rule out the presence of DD and MVR with a probability of 70% (negative predictive value 78%). Conclusion: CMR-derived PTT is a readily obtainable hemodynamic parameter. It is elevated in patients with DD and moderate to severe MVR. Low PTT values make the presence of DD and MVR, as assessed by echocardiography, unlikely.
Title: "Noninvasive Assessment of Cardiopulmonary Hemodynamics Using Cardiovascular Magnetic Resonance Pulmonary Transit Time." International Journal of Biomedical Imaging, vol. 2024, article 5691909. PMC11535428.
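The heart-rate normalization described in the abstract can be sketched as follows. The abstract states only that nPTT is obtained from PTT via Bazett's formula; by analogy with QTc correction, a plausible form divides PTT by the square root of the RR interval in seconds. This exact form, and the function name, are assumptions for illustration, not taken from the paper:

```python
import math

def normalized_ptt(ptt_s: float, heart_rate_bpm: float) -> float:
    """Heart-rate-normalized pulmonary transit time (hypothetical Bazett-style form).

    Divides PTT by the square root of the RR interval in seconds
    (RR = 60 / heart rate), analogous to QTc correction.
    """
    rr_interval_s = 60.0 / heart_rate_bpm
    return ptt_s / math.sqrt(rr_interval_s)

# At 60 bpm the RR interval is 1 s, so nPTT equals PTT.
print(normalized_ptt(8.0, 60.0))  # → 8.0
```

At heart rates above 60 bpm the correction increases the value, mirroring how Bazett's formula lengthens QTc at faster rates.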
Pub Date: 2024-10-11 | eCollection Date: 2024-01-01 | DOI: 10.1155/2024/2244875
Shapoor Shirani, Najmeh-Sadat Mousavi, Milad Ali Talib, Mohammad Ali Bagheri, Elahe Jazayeri Gharebagh, Qasim Abdulsahib Jaafar Hameed, Sadegh Dehghani
Background: Three-dimensional gradient-echo (3D-GRE) sequences provide isotropic or nearly isotropic 3D images, leading to better visualization of smaller structures compared to two-dimensional (2D) sequences. The aim of this study was to prospectively compare 2D and 3D-GRE sequences in terms of key imaging metrics, including signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), glenohumeral joint space, image quality, artifacts, and acquisition time in shoulder joint images, using a 1.5-T MRI scanner. Methods: Thirty-five normal volunteers with no history of shoulder disorders prospectively underwent a shoulder MRI examination with conventional 2D sequences, including T1- and T2-weighted fast spin echo (T1/T2w FSE) as well as proton density-weighted FSE with fat saturation (PD-FS), followed by 3D-GRE sequences including the VIBE, TRUEFISP, DESS, and MEDIC techniques. Two independent reviewers assessed all images of the shoulder joints. The Pearson correlation coefficient and intra-RR were used for the reliability analysis. Results: Among the 3D-GRE sequences, TRUEFISP showed the significantly highest CNR between cartilage and bone (31.37 ± 2.57, p < 0.05) and between cartilage and muscle (13.51 ± 1.14, p < 0.05). TRUEFISP also showed the highest SNR for cartilage (41.65 ± 2.19, p < 0.01) and muscle (26.71 ± 0.79, p < 0.05). Furthermore, 3D-GRE sequences showed significantly higher image quality compared to 2D sequences (p < 0.001). Moreover, the acquisition time of the 3D-GRE sequences was considerably shorter than the total acquisition time of the PD-FS sequences in three orientations (p < 0.01). Conclusions: 3D-GRE sequences provide superior image quality and efficiency for evaluating articular joints, particularly in shoulder imaging. The TRUEFISP technique offers the best contrast and signal quality, making it a valuable tool in clinical practice.
Title: "Comparison of 3D Gradient-Echo Versus 2D Sequences for Assessing Shoulder Joint Image Quality in MRI." International Journal of Biomedical Imaging, vol. 2024, article 2244875. PMC11489005.
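For readers unfamiliar with the metrics compared in this study, SNR and CNR can be computed from region-of-interest (ROI) statistics roughly as follows. This is a minimal sketch using common definitions; the study's exact ROI placement and noise-estimation protocol are not given in the abstract:

```python
import numpy as np

def snr(tissue_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """Signal-to-noise ratio: mean ROI signal over background-noise SD."""
    return float(tissue_roi.mean() / noise_roi.std())

def cnr(roi_a: np.ndarray, roi_b: np.ndarray, noise_roi: np.ndarray) -> float:
    """Contrast-to-noise ratio: absolute difference of mean tissue signals over noise SD."""
    return float(abs(roi_a.mean() - roi_b.mean()) / noise_roi.std())

# Toy example: cartilage vs. muscle ROIs against a background-noise patch.
noise = np.array([0.0, 2.0, 0.0, 2.0])  # mean 1, SD 1
cartilage = np.full(4, 10.0)
muscle = np.full(4, 4.0)
print(snr(cartilage, noise))         # → 10.0
print(cnr(cartilage, muscle, noise)) # → 6.0
```

In practice the noise ROI is drawn in artifact-free background air, and the same ROIs are reused across sequences so the comparison is fair.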
Pub Date: 2024-08-26 | eCollection Date: 2024-01-01 | DOI: 10.1155/2024/4482931
Aiwen Chen, Gavin Volpato, Alice Pong, Emma Schofield, Jun Huang, Zizhao Qiu, George Paxinos, Huazheng Liang
Background: The blood-brain barrier (BBB) is part of the neurovascular unit (NVU), which plays a key role in maintaining homeostasis. However, its 3D structure remains largely unknown. The present study aimed to image the BBB using tissue clearing and 3D imaging techniques in both human and rat brain tissue. Methods: Both human and rat brain tissue were cleared using the CUBIC technique and imaged with either a confocal or a two-photon microscope. Image stacks were reconstructed using Imaris. Results: Double staining with various antibodies targeting endothelial cells, the basal membrane, pericytes of blood vessels, and microglial cells, together with analysis of the spatial relationship between astrocytes and blood vessels, showed that endothelial cells do not evenly express CD31 and the Glut1 transporter in the human brain. Astrocytes covered only a small portion of the vessels, as shown by the overlap between GFAP-positive astrocytes and Collagen IV/CD31-positive endothelial cells as well as between GFAP-positive astrocytes and CD146-positive pericytes, leaving a large gap between their end-feet. A similar structure was observed in the rat brain. Conclusions: The present study demonstrated the 3D structure of both the human and rat BBB, which differs from the 2D view. Tissue clearing and 3D imaging are promising techniques for answering more questions about the true structure of biological specimens.
Title: "The Blood-Brain Barrier in Both Humans and Rats: A Perspective From 3D Imaging." International Journal of Biomedical Imaging, vol. 2024, article 4482931. PMC11368551.
Pub Date: 2024-08-09 | eCollection Date: 2024-01-01 | DOI: 10.1155/2024/9422083
Urvi Oza, Bakul Gohel, Pankaj Kumar, Parita Oza
Accurate segmentation of breast masses in mammogram images is essential for early cancer diagnosis and treatment planning. Several deep learning (DL) models have been proposed for whole-mammogram segmentation and mass patch/crop segmentation. However, current DL models for breast mammogram mass segmentation face several limitations, including false positives (FPs), false negatives (FNs), and challenges with the end-to-end approach. This paper presents a novel two-stage end-to-end cascaded breast mass segmentation framework that incorporates a saliency map of potential mass regions to guide the DL models for breast mass segmentation. The first-stage segmentation model of the cascade framework generates a saliency map to establish a coarse region of interest (ROI), effectively narrowing the focus to probable mass regions. The proposed presegmenter attention (PSA) blocks are introduced in the second-stage segmentation model to enable dynamic adaptation to the most informative regions within the mammogram images based on the generated saliency map. A comparative analysis of the Attention U-net model with and without the cascade framework is provided in terms of Dice scores, precision, recall, FP rates (FPRs), and FN outcomes. Experimental results consistently demonstrate enhanced breast mass segmentation performance by the proposed cascade framework across all three datasets: INbreast, CSAW-S, and DMID. The cascade framework shows superior segmentation performance, improving the Dice score by about 6% for the INbreast dataset, 3% for the CSAW-S dataset, and 2% for the DMID dataset. Similarly, FN outcomes were reduced by 10% for the INbreast dataset, 19% for the CSAW-S dataset, and 4% for the DMID dataset. Moreover, the proposed cascade framework's performance is validated with various state-of-the-art segmentation models, such as DeepLabV3+ and Swin transformer U-net.
The presegmenter cascade framework has the potential to improve segmentation performance and mitigate FNs when integrated with any medical image segmentation framework, irrespective of the choice of model.
Title: "Presegmenter Cascaded Framework for Mammogram Mass Segmentation." International Journal of Biomedical Imaging, vol. 2024, article 9422083. PMC11329304.
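The Dice score used throughout the comparison above is the standard overlap metric between a predicted mask and the ground truth; a minimal sketch (not the authors' code) is:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient 2|A∩B| / (|A| + |B|) for binary masks; eps guards empty masks."""
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Perfect overlap gives 1.0; disjoint masks give ~0.
a = np.array([1, 1, 0, 0], dtype=bool)
b = np.array([1, 0, 0, 0], dtype=bool)
print(round(dice_score(a, b), 3))  # → 0.667
```

A "6% Dice improvement" therefore means the harmonic overlap between prediction and annotation grew by that margin, which is substantial for small lesion masks where a few pixels dominate the score.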
Pub Date: 2024-06-06 | eCollection Date: 2024-01-01 | DOI: 10.1155/2024/4960630
Shixin Lai, Weipiao Kang, Yaowen Chen, Jisheng Zou, Siqi Wang, Xuan Zhang, Xiaolei Zhang, Yu Lin
Chronic rhinosinusitis (CRS) is a global disease characterized by poor treatment outcomes and high recurrence rates, significantly affecting patients' quality of life. Due to its complex pathophysiology and diverse clinical presentations, CRS is categorized into various subtypes to facilitate more precise diagnosis, treatment, and prognosis prediction. Among these, CRS with nasal polyps (CRSwNP) is further divided into eosinophilic CRSwNP (eCRSwNP) and noneosinophilic CRSwNP (non-eCRSwNP). However, there is a lack of precise predictive diagnostic and treatment methods, making research into accurate diagnostic techniques for CRSwNP endotypes crucial for achieving precision medicine in CRSwNP. This paper proposes a method using multiangle sinus computed tomography (CT) images combined with artificial intelligence (AI) to predict CRSwNP endotypes, distinguishing between patients with eCRSwNP and non-eCRSwNP. The considered dataset comprises 22,265 CT images from 192 CRSwNP patients, including 13,203 images from non-eCRSwNP patients and 9,062 images from eCRSwNP patients. Test results from the network model demonstrate that multiangle images provide more useful information for the network, achieving an accuracy of 98.43%, precision of 98.1%, recall of 98.1%, specificity of 98.7%, and an AUC value of 0.984. Compared to the limited learning capacity of single-channel neural networks, our proposed multichannel feature adaptive fusion model captures multiscale spatial features, enhancing the model's focus on crucial sinus information within the CT images to maximize detection accuracy. This deep learning-based diagnostic model for CRSwNP endotypes offers excellent classification performance, providing a noninvasive method for accurately predicting CRSwNP endotypes before treatment and paving the way for precision medicine in the new era of CRSwNP.
Title: "An End-to-End CRSwNP Prediction with Multichannel ResNet on Computed Tomography." International Journal of Biomedical Imaging, vol. 2024, article 4960630. PMC11178416.
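The accuracy, precision, recall, and specificity figures quoted above all derive from confusion-matrix counts; for reference, a minimal sketch (independent of the authors' implementation) is:

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Accuracy, precision, recall (sensitivity), and specificity from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Toy counts: 8 true positives, 2 false positives, 8 true negatives, 2 false negatives.
print(classification_metrics(8, 2, 8, 2)["recall"])  # → 0.8
```

Reporting specificity alongside recall, as the study does, matters for a two-class endotype problem: it shows the model is not simply favoring the majority (non-eCRSwNP) class.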
Pub Date: 2024-06-06 | eCollection Date: 2024-01-01 | DOI: 10.1155/2024/1397875
Sophia Ackermann, Maximilian Herold, Vincent Rohrbacher, Michael Schäfer, Marcell Tóth, Stefan Thomann, Thilo Hackert, Eduard Ryschich
Purpose: Surgical resection is the only curative option for pancreatic carcinoma, but disease-free and overall survival times after surgery are limited due to early tumor recurrence, most often originating from local microscopic tumor residues (R1 resection). The intraoperative identification of microscopic tumor residues within the resection margin in situ could improve surgical performance. The aim of this study was to evaluate the effectiveness of fiber-optic microscopy for detecting microscopic residues in vital pancreatic cancer tissues. Experimental Design: Fresh whole-mount human pancreatic tissues, histological tissue slides, cell culture, and chorioallantoic membrane xenografts were analyzed. Specimens were stained with selected fluorophore-conjugated antibodies and studied using conventional wide-field and self-designed multicolor fiber-optic fluorescence microscopy instruments.
Results: Whole-mount vital human tissues and xenografts were stained and imaged using an in situ immunofluorescence protocol. Fiber-optic microscopy enabled the detection of epitope-based fluorescence in vital whole-mount tissue using fluorophore-conjugated antibodies and enabled visualization of microvascular, epithelial, and malignant tumor cells. Among the selected antigen-antibody pairs, antibody clones WM59, AY13, and 9C4 were the most promising for fiber-optic imaging in human tissue samples and for endothelial, tumor, and epithelial cell detection.
Conclusions: Freshly dissected whole-mount tissue can be stained by direct exposure to selected antibody clones. Several antibody clones were identified that provided excellent immunofluorescence imaging of labeled structures, such as endothelial, epithelial, or EGFR-expressing cells. The combination of in situ immunofluorescence staining and fiber-optic microscopy visualizes structures in vital tissues and could be proposed as a useful tool for the in situ identification of residual tumor mass in patients with a high operative risk of incomplete resection.
Title: "In Situ Immunofluorescence Imaging of Vital Human Pancreatic Tissue Using Fiber-Optic Microscopy." International Journal of Biomedical Imaging, vol. 2024, article 1397875. PMC11178408.
Pub Date: 2024-05-24 | eCollection Date: 2024-01-01 | DOI: 10.1155/2024/9962839
Kenan Morani, Esra Kaya Ayana, Dimitrios Kollias, Devrim Unay
This paper extends our previous method for COVID-19 diagnosis, proposing an enhanced solution for detecting COVID-19 from computed tomography (CT) images using a lean transfer learning-based model. To decrease model misclassifications, two key image processing steps were employed. First, the uppermost and lowermost slices were removed, preserving sixty percent of each patient's slices. Second, all slices underwent manual cropping to emphasize the lung areas. Subsequently, resized CT scans (224 × 224) were input into an Xception transfer learning model with a modified output. Both Xception's architecture and pretrained weights were leveraged in the method. A large, rigorously annotated database of CT images was used to verify the method. The dataset contains more than 5000 patients/subjects, and the number and shape of the slices in each CT scan vary greatly. Verification was performed both on the validation partition and on the test partition of unseen images. Results on the COV19-CT database showcased not only improvement over our previous solution and the baseline but also performance comparable to the highest-achieving methods on the same dataset. Further validation studies could explore the scalability and adaptability of the developed methodologies across diverse healthcare settings and patient populations.
COVID-19 Detection from Computed Tomography Images Using Slice Processing Techniques and a Modified Xception Classifier. Kenan Morani, Esra Kaya Ayana, Dimitrios Kollias, Devrim Unay. International Journal of Biomedical Imaging, 2024, 9962839. Pub Date : 2024-05-24. DOI: 10.1155/2024/9962839.
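The slice-trimming step described above (removing the uppermost and lowermost slices while preserving sixty percent of each patient's slices) can be sketched as follows. This is a minimal illustration, assuming a symmetric trim; the function name and details are not taken from the paper:

```python
def keep_central_slices(slices, keep_fraction=0.6):
    """Drop the top and bottom slices of a CT volume symmetrically,
    keeping the central `keep_fraction` of slices."""
    n = len(slices)
    n_keep = max(1, round(n * keep_fraction))
    start = (n - n_keep) // 2
    return slices[start:start + n_keep]

# Example: a volume with 10 slices keeps the central 6.
volume = list(range(10))          # stand-in for 10 CT slices
central = keep_central_slices(volume)
print(central)                    # [2, 3, 4, 5, 6, 7]
```

In practice the kept slices would then be cropped to the lung region and resized to 224 × 224 before being fed to the classifier.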
Pub Date : 2024-05-02. eCollection Date: 2024-01-01. DOI: 10.1155/2024/8972980
Md Biddut Hossain, Rupali Kiran Shinde, Shariar Md Imtiaz, F M Fahmid Hossain, Seok-Hee Jeon, Ki-Chul Kwon, Nam Kim
We present a deep learning-based method that corrects motion artifacts and thus accelerates data acquisition and reconstruction of magnetic resonance images. The novel model, the Motion Artifact Correction by Swin Network (MACS-Net), uses a Swin transformer layer as the fundamental block and the Unet architecture as the neural network backbone. We employ a hierarchical transformer with shifted windows to extract multiscale contextual features during encoding. A new dual upsampling technique is employed to enhance the spatial resolutions of feature maps in the Swin transformer-based decoder layer. A raw magnetic resonance imaging dataset is used for network training and testing; the data contain various motion artifacts with ground truth images of the same subjects. The results were compared to six state-of-the-art MRI image motion correction methods using two types of motions. When motions were brief (within 5 s), the method reduced the average normalized root mean square error (NRMSE) from 45.25% to 17.51%, increased the mean structural similarity index measure (SSIM) from 79.43% to 91.72%, and increased the peak signal-to-noise ratio (PSNR) from 18.24 to 26.57 dB. Similarly, when motions were extended from 5 to 10 s, our approach decreased the average NRMSE from 60.30% to 21.04%, improved the mean SSIM from 33.86% to 90.33%, and increased the PSNR from 15.64 to 24.99 dB. The anatomical structures of the corrected images and the motion-free brain data were similar.
Swin Transformer and the Unet Architecture to Correct Motion Artifacts in Magnetic Resonance Image Reconstruction. International Journal of Biomedical Imaging, 2024, 8972980.
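Two of the error metrics reported above can be computed directly from a reference and a reconstructed image. A minimal sketch of NRMSE (normalized here by the reference's dynamic range) and PSNR, assuming the common definitions rather than the exact normalization used in the paper:

```python
import numpy as np

def nrmse(reference, reconstructed):
    """Root mean square error normalized by the reference's dynamic range."""
    rmse = np.sqrt(np.mean((reference - reconstructed) ** 2))
    return rmse / (reference.max() - reference.min())

def psnr(reference, reconstructed, data_range=None):
    """Peak signal-to-noise ratio in dB."""
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - reconstructed) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

# Toy 1D "images"; real inputs would be 2D/3D MRI arrays.
ref = np.array([0.0, 1.0, 1.0, 0.0])
rec = np.array([0.0, 0.5, 1.0, 0.0])
print(f"NRMSE: {nrmse(ref, rec):.4f}")   # 0.2500
print(f"PSNR:  {psnr(ref, rec):.2f} dB") # 12.04 dB
```

SSIM is more involved (local means, variances, and covariances over sliding windows) and is typically taken from an image-processing library rather than hand-rolled.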
Pub Date : 2024-04-29. eCollection Date: 2024-01-01. DOI: 10.1155/2024/6347920
N I Md Ashafuddula, Rafiqul Islam
Brain tumors are critical neurological ailments caused by uncontrolled cell growth in the brain or skull, often leading to death. Prompt detection is essential to improving patient survival; however, the complexity of brain tissue makes early diagnosis challenging. Hence, automated tools are needed to aid healthcare professionals. This study aims to improve the efficacy of computerized brain tumor detection in a clinical setting through a deep learning model. To that end, a novel thresholding-based MRI image segmentation approach with a contour-based transfer learning model (ContourTL-Net) is proposed to facilitate the clinical detection of brain malignancies at an early stage. The model utilizes contour-based analysis, which is critical for object detection, precise segmentation, and capturing subtle variations in tumor morphology. It employs a VGG-16 architecture pretrained on the ImageNet dataset for feature extraction and classification, using ten nontrainable and three trainable convolutional layers and three dropout layers. The proposed ContourTL-Net model is evaluated on two benchmark datasets in four ways, one of which uses unseen data to represent the clinical setting. Validating a deep learning model on unseen data is crucial for determining its generalization capability, domain adaptation, robustness, and real-world applicability. The presented model classifies the unseen data highly accurately, achieving a perfect sensitivity and negative predictive value (NPV) of 100%, 98.60% specificity, 99.12% precision, a 99.56% F1-score, and 99.46% accuracy. Additionally, the model's outcomes are compared with state-of-the-art methodologies. The proposed solution outperforms existing solutions on both seen and unseen data, with the potential to significantly improve brain tumor detection efficiency and accuracy, leading to earlier diagnoses and improved patient outcomes.
ContourTL-Net: Contour-Based Transfer Learning Algorithm for Early-Stage Brain Tumor Detection. International Journal of Biomedical Imaging, 2024, 6347920.
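The metrics reported above (sensitivity, specificity, precision, NPV, F1-score, accuracy) all derive from the four confusion-matrix counts. A minimal sketch with illustrative counts, not the paper's data:

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # recall / true positive rate
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    npv = tn / (tn + fn)                  # negative predictive value
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "npv": npv,
            "f1": f1, "accuracy": accuracy}

# Illustrative counts: zero false negatives gives perfect
# sensitivity and NPV, as reported in the abstract.
m = classification_metrics(tp=90, fp=2, tn=100, fn=0)
print(m["sensitivity"], m["npv"])  # 1.0 1.0
```

Note that a perfect sensitivity and NPV together simply mean no positive case was missed (fn = 0); the remaining metrics are then governed by the false-positive count.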
A challenge in accurately identifying and classifying left ventricular hypertrophy (LVH) is distinguishing it from hypertrophic cardiomyopathy (HCM) and Fabry disease. The reliance on imaging techniques often requires the expertise of multiple specialists, including cardiologists, radiologists, and geneticists, and this variability in interpretation and classification leads to inconsistent diagnoses. LVH, HCM, and Fabry cardiomyopathy can be differentiated using T1 mapping on cardiac magnetic resonance imaging (MRI); however, differentiating HCM from Fabry cardiomyopathy using echocardiography or MRI cine images is challenging for cardiologists. Our proposed system, the MRI short-axis view left ventricular hypertrophy classifier (MSLVHC), is a high-accuracy standardized imaging classification model developed using AI and trained on MRI short-axis (SAX) view cine images to distinguish between HCM and Fabry disease. The model achieved an F1-score of 0.846, an accuracy of 0.909, and an AUC of 0.914 when tested on the Taipei Veterans General Hospital (TVGH) dataset. Additionally, a single-blinded study and external testing on data from the Taichung Veterans General Hospital (TCVGH) confirmed the model's reliability and generalizability, with an F1-score of 0.727, an accuracy of 0.806, and an AUC of 0.918. This AI model holds promise as a valuable tool for assisting specialists in diagnosing LVH diseases.
A Deep Learning Approach to Classify Fabry Cardiomyopathy from Hypertrophic Cardiomyopathy Using Cine Imaging on Cardiac Magnetic Resonance. Wei-Wen Chen, Ling Kuo, Yi-Xun Lin, Wen-Chung Yu, Chien-Chao Tseng, Yenn-Jiang Lin, Ching-Chun Huang, Shih-Lin Chang, Jacky Chung-Hao Wu, Chun-Ku Chen, Ching-Yao Weng, Siwa Chan, Wei-Wen Lin, Yu-Cheng Hsieh, Ming-Chih Lin, Yun-Ching Fu, Tsung Chen, Shih-Ann Chen, Henry Horng-Shing Lu. International Journal of Biomedical Imaging, 2024, 6114826. Pub Date : 2024-04-26. DOI: 10.1155/2024/6114826.
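The AUC figures reported throughout these abstracts are equivalent to the probability that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative one. A minimal rank-based (Mann-Whitney) sketch with made-up scores, not data from any of the studies:

```python
def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney U statistic: the fraction of
    positive/negative pairs in which the positive case scores
    higher; ties count half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Made-up classifier scores for positive and negative cases.
pos = [0.9, 0.8, 0.4]
neg = [0.7, 0.3, 0.2]
print(auc(pos, neg))  # 8 of 9 pairs correctly ranked ≈ 0.889
```

This O(n·m) double loop is fine for illustration; production code would use a sorting-based implementation or a library routine instead.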