
Latest publications in International Journal of Biomedical Imaging

The Blood-Brain Barrier in Both Humans and Rats: A Perspective From 3D Imaging.
IF 3.3 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2024-08-26 eCollection Date: 2024-01-01 DOI: 10.1155/2024/4482931
Aiwen Chen, Gavin Volpato, Alice Pong, Emma Schofield, Jun Huang, Zizhao Qiu, George Paxinos, Huazheng Liang

Background: The blood-brain barrier (BBB) is part of the neurovascular unit (NVU), which plays a key role in maintaining homeostasis. However, its 3D structure remains largely unknown. The present study aimed to image the BBB in both human and rat brain tissue using tissue clearing and 3D imaging techniques. Methods: Human and rat brain tissue were cleared using the CUBIC technique and imaged with either a confocal or a two-photon microscope. Image stacks were reconstructed using Imaris. Results: Double staining with antibodies targeting endothelial cells, the basal membrane, vascular pericytes, and microglial cells, together with analysis of the spatial relationship between astrocytes and blood vessels, showed that endothelial cells do not express CD31 and the Glut1 transporter evenly in the human brain. Astrocytes covered only a small portion of the vessels, as shown by the overlap between GFAP-positive astrocytes and Collagen IV/CD31-positive endothelial cells as well as between GFAP-positive astrocytes and CD146-positive pericytes, leaving a large gap between their end feet. A similar structure was observed in the rat brain. Conclusions: The present study demonstrated the 3D structure of both the human and the rat BBB, which differs from the 2D view. Tissue clearing and 3D imaging are promising techniques for answering more questions about the real structure of biological specimens.
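The vessel-coverage finding above suggests a simple quantitative readout: the fraction of vessel voxels lying near astrocyte signal. A minimal NumPy/SciPy sketch, not the authors' Imaris pipeline, with hypothetical binary masks and a hypothetical `reach_vox` tolerance:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def astrocyte_coverage(vessel_mask, astrocyte_mask, reach_vox=2):
    """Fraction of vessel voxels within `reach_vox` voxels of
    GFAP-positive astrocyte signal (illustrative definition)."""
    # grow the astrocyte mask to model end-feet reaching toward vessels
    near_astro = binary_dilation(astrocyte_mask, iterations=reach_vox)
    vessel = vessel_mask.astype(bool)
    if vessel.sum() == 0:
        return 0.0
    return float(np.logical_and(vessel, near_astro).sum() / vessel.sum())
```

Applied to, e.g., a CD31 channel and a GFAP channel after thresholding, a value well below 1.0 would reflect the partial coverage reported above.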

Citations: 0
Presegmenter Cascaded Framework for Mammogram Mass Segmentation.
IF 3.3 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2024-08-09 eCollection Date: 2024-01-01 DOI: 10.1155/2024/9422083
Urvi Oza, Bakul Gohel, Pankaj Kumar, Parita Oza

Accurate segmentation of breast masses in mammogram images is essential for early cancer diagnosis and treatment planning. Several deep learning (DL) models have been proposed for whole mammogram segmentation and mass patch/crop segmentation. However, current DL models for breast mammogram mass segmentation face several limitations, including false positives (FPs), false negatives (FNs), and challenges with the end-to-end approach. This paper presents a novel two-stage end-to-end cascaded breast mass segmentation framework that incorporates a saliency map of potential mass regions to guide the DL models for breast mass segmentation. The first-stage segmentation model of the cascade framework is used to generate a saliency map to establish a coarse region of interest (ROI), effectively narrowing the focus to probable mass regions. The proposed presegmenter attention (PSA) blocks are introduced in the second-stage segmentation model to enable dynamic adaptation to the most informative regions within the mammogram images based on the generated saliency map. Comparative analysis of the Attention U-net model with and without the cascade framework is provided in terms of Dice scores, precision, recall, FP rates (FPRs), and FN outcomes. Experimental results consistently demonstrate enhanced breast mass segmentation performance by the proposed cascade framework across all three datasets: INbreast, CSAW-S, and DMID. The cascade framework shows superior segmentation performance, improving the Dice score by about 6% for the INbreast dataset, 3% for the CSAW-S dataset, and 2% for the DMID dataset. Similarly, the FN outcomes were reduced by 10% for the INbreast dataset, 19% for the CSAW-S dataset, and 4% for the DMID dataset. Moreover, the proposed cascade framework's performance is validated with varying state-of-the-art segmentation models such as DeepLabV3+ and Swin transformer U-net. The presegmenter cascade framework has the potential to improve segmentation performance and mitigate FNs when integrated with any medical image segmentation framework, irrespective of the choice of the model.
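The two-stage cascade described above (a stage-1 saliency map fed to stage 2 as guidance) and the Dice metric used to evaluate it can be sketched as follows. `stage1` and `stage2` are stand-ins for the trained networks, not the authors' models; the channel-stacking step is an assumption about how the saliency map guides stage 2:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def cascade_segment(image, stage1, stage2, thresh=0.5):
    """Two-stage cascade sketch: stage 1 yields a saliency map of
    probable mass regions; stage 2 sees the image plus that map
    as an extra input channel and produces the final mask."""
    saliency = stage1(image)                       # coarse ROI probabilities
    stacked = np.stack([image, saliency], axis=0)  # image + saliency channel
    return stage2(stacked) > thresh
```

With trained networks in place of the lambdas, the final mask would be compared to ground truth via `dice_score`, mirroring the evaluation above.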

Citations: 0
An End-to-End CRSwNP Prediction with Multichannel ResNet on Computed Tomography.
IF 7.6 Q1 Medicine Pub Date: 2024-06-06 eCollection Date: 2024-01-01 DOI: 10.1155/2024/4960630
Shixin Lai, Weipiao Kang, Yaowen Chen, Jisheng Zou, Siqi Wang, Xuan Zhang, Xiaolei Zhang, Yu Lin

Chronic rhinosinusitis (CRS) is a global disease characterized by poor treatment outcomes and high recurrence rates, significantly affecting patients' quality of life. Due to its complex pathophysiology and diverse clinical presentations, CRS is categorized into various subtypes to facilitate more precise diagnosis, treatment, and prognosis prediction. Among these, CRS with nasal polyps (CRSwNP) is further divided into eosinophilic CRSwNP (eCRSwNP) and noneosinophilic CRSwNP (non-eCRSwNP). However, there is a lack of precise predictive diagnostic and treatment methods, making research into accurate diagnostic techniques for CRSwNP endotypes crucial for achieving precision medicine in CRSwNP. This paper proposes a method using multiangle sinus computed tomography (CT) images combined with artificial intelligence (AI) to predict CRSwNP endotypes, distinguishing between patients with eCRSwNP and non-eCRSwNP. The considered dataset comprises 22,265 CT images from 192 CRSwNP patients, including 13,203 images from non-eCRSwNP patients and 9,062 images from eCRSwNP patients. Test results from the network model demonstrate that multiangle images provide more useful information for the network, achieving an accuracy of 98.43%, precision of 98.1%, recall of 98.1%, specificity of 98.7%, and an AUC value of 0.984. Compared to the limited learning capacity of single-channel neural networks, our proposed multichannel feature adaptive fusion model captures multiscale spatial features, enhancing the model's focus on crucial sinus information within the CT images to maximize detection accuracy. This deep learning-based diagnostic model for CRSwNP endotypes offers excellent classification performance, providing a noninvasive method for accurately predicting CRSwNP endotypes before treatment and paving the way for precision medicine in the new era of CRSwNP.
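The multichannel adaptive-fusion idea above (letting the more informative angles contribute more) can be illustrated with a softmax-weighted combination of per-angle feature maps. This is a hedged sketch of the general technique, with hypothetical `features` and `scores` inputs, not the paper's exact fusion block:

```python
import numpy as np

def adaptive_fusion(features, scores):
    """Combine per-angle feature maps with softmax-normalised weights.

    features: array of shape (n_angles, H, W)
    scores:   per-angle informativeness scores, shape (n_angles,)
    """
    w = np.exp(scores - scores.max())  # stable softmax over angles
    w = w / w.sum()
    # weighted sum over the angle/channel axis
    return np.tensordot(w, features, axes=(0, 0))
```

Equal scores reduce this to a plain average; a dominant score makes the fused map follow the corresponding view, which is the adaptive behaviour the abstract describes at a high level.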

Citations: 0
In Situ Immunofluorescence Imaging of Vital Human Pancreatic Tissue Using Fiber-Optic Microscopy.
IF 7.6 Q1 Medicine Pub Date: 2024-06-06 eCollection Date: 2024-01-01 DOI: 10.1155/2024/1397875
Sophia Ackermann, Maximilian Herold, Vincent Rohrbacher, Michael Schäfer, Marcell Tóth, Stefan Thomann, Thilo Hackert, Eduard Ryschich

Purpose: Surgical resection is the only curative option for pancreatic carcinoma, but disease-free and overall survival times after surgery are limited due to early tumor recurrence, most often originating from local microscopic tumor residues (R1 resection). The intraoperative identification of microscopic tumor residues within the resection margin in situ could improve surgical performance. The aim of this study was to evaluate the effectiveness of fiber-optic microscopy for detecting microscopic residues in vital pancreatic cancer tissues. Experimental Design: Fresh whole-mount human pancreatic tissues, histological tissue slides, cell culture, and chorioallantoic membrane xenografts were analyzed. Specimens were stained with selected fluorophore-conjugated antibodies and studied using conventional wide-field and self-designed multicolor fiber-optic fluorescence microscopy instruments.

Results: Whole-mount vital human tissues and xenografts were stained and imaged using an in situ immunofluorescence protocol. Fiber-optic microscopy enabled the detection of epitope-based fluorescence in vital whole-mount tissue using fluorophore-conjugated antibodies and enabled visualization of microvascular, epithelial, and malignant tumor cells. Among the selected antigen-antibody pairs, antibody clones WM59, AY13, and 9C4 were the most promising for fiber-optic imaging in human tissue samples and for endothelial, tumor, and epithelial cell detection.

Conclusions: Fresh dissected whole-mount tissue can be stained using direct exposure to selected antibody clones. Several antibody clones were identified that provided excellent immunofluorescence imaging of labeled structures, such as endothelial, epithelial, or EGFR-expressing cells. The combination of in situ immunofluorescence staining and fiber-optic microscopy visualizes structures in vital tissues and could be proposed as a useful tool for the in situ identification of residual tumor mass in patients with a high operative risk for incomplete resection.

Citations: 0
COVID-19 Detection from Computed Tomography Images Using Slice Processing Techniques and a Modified Xception Classifier.
IF 7.6 Q1 Medicine Pub Date: 2024-05-24 eCollection Date: 2024-01-01 DOI: 10.1155/2024/9962839
Kenan Morani, Esra Kaya Ayana, Dimitrios Kollias, Devrim Unay

This paper extends our previous method for COVID-19 diagnosis, proposing an enhanced solution for detecting COVID-19 from computed tomography (CT) images using a lean transfer learning-based model. To decrease model misclassifications, two key image processing steps were employed. Firstly, the uppermost and lowermost slices were removed, preserving sixty percent of each patient's slices. Secondly, all slices underwent manual cropping to emphasize the lung areas. Subsequently, resized CT scans (224 × 224) were input into an Xception transfer learning model with a modified output. Both Xception's architecture and pretrained weights were leveraged in the method. A large, rigorously annotated database of CT images was used to verify the method. The number of patients/subjects in the dataset is more than 5000, and the number and shape of the slices in each CT scan vary greatly. Verification was performed on both the validation partition and the test partition of unseen images. Results on the COV19-CT database showcased not only improvement over our previous solution and the baseline but also performance comparable to the highest-achieving methods on the same dataset. Further validation studies could explore the scalability and adaptability of the developed methodologies across diverse healthcare settings and patient populations. Additionally, investigating the integration of advanced image processing techniques, such as automated region of interest detection and segmentation algorithms, could enhance the efficiency and accuracy of COVID-19 diagnosis.
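The two preprocessing steps above (keep the middle sixty percent of slices, then crop toward the lungs) can be sketched on a NumPy volume. The fixed central crop here is a hypothetical stand-in for the manual lung cropping the abstract describes, and the resize step is omitted:

```python
import numpy as np

def preprocess_volume(volume, keep_frac=0.60):
    """Drop the uppermost and lowermost slices so that `keep_frac`
    of the stack remains, then apply a central crop per slice
    (an illustrative stand-in for manual lung cropping)."""
    n = volume.shape[0]
    drop = int(round(n * (1 - keep_frac) / 2))  # slices removed per end
    kept = volume[drop:n - drop]
    h, w = kept.shape[1:]
    # central crop to half the height/width as a placeholder ROI
    return kept[:, h // 4: h - h // 4, w // 4: w - w // 4]
```

In a full pipeline the cropped slices would then be resized to 224 × 224 (e.g. with an image library) before being fed to the Xception-based classifier.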

Citations: 0
Swin Transformer and the Unet Architecture to Correct Motion Artifacts in Magnetic Resonance Image Reconstruction.
IF 7.6 Q1 Medicine Pub Date: 2024-05-02 eCollection Date: 2024-01-01 DOI: 10.1155/2024/8972980
Md Biddut Hossain, Rupali Kiran Shinde, Shariar Md Imtiaz, F M Fahmid Hossain, Seok-Hee Jeon, Ki-Chul Kwon, Nam Kim

We present a deep learning-based method that corrects motion artifacts and thus accelerates data acquisition and reconstruction of magnetic resonance images. The novel model, the Motion Artifact Correction by Swin Network (MACS-Net), uses a Swin transformer layer as the fundamental block and the Unet architecture as the neural network backbone. We employ a hierarchical transformer with shifted windows to extract multiscale contextual features during encoding. A new dual upsampling technique is employed to enhance the spatial resolutions of feature maps in the Swin transformer-based decoder layer. A raw magnetic resonance imaging dataset is used for network training and testing; the data contain various motion artifacts with ground truth images of the same subjects. The results were compared to six state-of-the-art MRI motion correction methods using two types of motions. When motions were brief (within 5 s), the method reduced the average normalized root mean square error (NRMSE) from 45.25% to 17.51%, increased the mean structural similarity index measure (SSIM) from 79.43% to 91.72%, and increased the peak signal-to-noise ratio (PSNR) from 18.24 to 26.57 dB. Similarly, when motions were extended from 5 to 10 s, our approach decreased the average NRMSE from 60.30% to 21.04%, improved the mean SSIM from 33.86% to 90.33%, and increased the PSNR from 15.64 to 24.99 dB. The anatomical structures of the corrected images and the motion-free brain data were similar.
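The NRMSE and PSNR figures quoted above follow standard definitions; a minimal NumPy sketch, assuming range-normalised NRMSE (one common convention; the paper may use another normalisation) and a known peak value:

```python
import numpy as np

def nrmse(ref, img):
    """Normalised root mean square error, in percent,
    normalised by the reference intensity range."""
    rmse = np.sqrt(np.mean((ref - img) ** 2))
    return 100.0 * rmse / (ref.max() - ref.min())

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB for a given peak intensity."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Lower NRMSE and higher PSNR against the motion-free ground truth correspond to the improvements reported above.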

Citations: 0
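The NRMSE and PSNR figures quoted above follow standard definitions (error normalized by the reference intensity range; peak signal-to-noise ratio in dB). A minimal NumPy sketch on synthetic toy images, not the paper's MACS-Net pipeline; the `nrmse` and `psnr` helpers are illustrative, not the authors' code:

```python
import numpy as np

def nrmse(ref, img):
    # root mean square error normalized by the reference intensity range
    rmse = np.sqrt(np.mean((ref - img) ** 2))
    return rmse / (ref.max() - ref.min())

def psnr(ref, img):
    # peak signal-to-noise ratio in dB, using the reference range as the peak
    mse = np.mean((ref - img) ** 2)
    peak = ref.max() - ref.min()
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.random((64, 64))                          # stand-in for a motion-free image
corrupted = clean + 0.05 * rng.standard_normal(clean.shape)  # stand-in for a corrupted image
print(f"NRMSE: {nrmse(clean, corrupted):.4f}")
print(f"PSNR:  {psnr(clean, corrupted):.2f} dB")
```

Lower NRMSE and higher PSNR both indicate the corrected image is closer to the motion-free reference, which is how the improvements above should be read.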
ContourTL-Net: Contour-Based Transfer Learning Algorithm for Early-Stage Brain Tumor Detection. ContourTL-Net:基于轮廓的转移学习算法,用于早期脑肿瘤检测。
IF 7.6 Q1 Medicine Pub Date : 2024-04-29 eCollection Date: 2024-01-01 DOI: 10.1155/2024/6347920
N I Md Ashafuddula, Rafiqul Islam

Brain tumors are critical neurological ailments caused by uncontrolled cell growth in the brain or skull, often leading to death. Improving patient survival requires prompt detection; however, the complexities of brain tissue make early diagnosis challenging. Hence, automated tools are necessary to aid healthcare professionals. This study is aimed at improving the efficacy of computerized brain tumor detection in a clinical setting through a deep learning model. A novel thresholding-based MRI image segmentation approach with a contour-based transfer learning model (ContourTL-Net) is proposed to facilitate the clinical detection of brain malignancies at an early stage. The model relies on contour-based analysis, which is critical for object detection, precise segmentation, and capturing subtle variations in tumor morphology. It employs a VGG-16 architecture pretrained on the ImageNet dataset for feature extraction and categorization, using ten frozen (nontrainable) and three trainable convolutional layers together with three dropout layers. The proposed ContourTL-Net model is evaluated on two benchmark datasets in four ways, with a held-out unseen case serving as the clinical scenario. Validating a deep learning model on unseen data is crucial for establishing its generalization capability, domain adaptation, robustness, and real-world applicability. The presented model classifies the unseen data highly accurately, achieving a perfect sensitivity and negative predictive value (NPV) of 100%, 98.60% specificity, 99.12% precision, a 99.56% F1-score, and 99.46% accuracy. Additionally, the outcomes of the suggested model are compared with state-of-the-art methodologies to further assess its effectiveness.
The proposed solution outperforms the existing solutions in both seen and unseen data, with the potential to significantly improve brain tumor detection efficiency and accuracy, leading to earlier diagnoses and improved patient outcomes.

Citations: 0
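The sensitivity, specificity, precision, NPV, F1-score, and accuracy reported above all derive from the four confusion-matrix counts (TP, FP, TN, FN). A small sketch with hypothetical counts, not the paper's data:

```python
def binary_metrics(tp, fp, tn, fn):
    # standard binary classification metrics from confusion-matrix counts
    sensitivity = tp / (tp + fn)            # recall / true positive rate
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)              # positive predictive value
    npv = tn / (tn + fn)                    # negative predictive value
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "npv": npv, "f1": f1, "accuracy": accuracy}

# hypothetical tumor/no-tumor counts: zero false negatives gives
# perfect sensitivity and NPV, as in the pattern reported above
m = binary_metrics(tp=90, fp=1, tn=108, fn=0)
for name, value in m.items():
    print(f"{name}: {value:.4f}")
```

Note that perfect sensitivity and NPV together imply no positive case was missed (FN = 0), which is why those two metrics reach 100% simultaneously.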
A Deep Learning Approach to Classify Fabry Cardiomyopathy from Hypertrophic Cardiomyopathy Using Cine Imaging on Cardiac Magnetic Resonance. 利用心脏磁共振成像技术对法布里心肌病和肥厚型心肌病进行分类的深度学习方法。
IF 7.6 Q1 Medicine Pub Date : 2024-04-26 eCollection Date: 2024-01-01 DOI: 10.1155/2024/6114826
Wei-Wen Chen, Ling Kuo, Yi-Xun Lin, Wen-Chung Yu, Chien-Chao Tseng, Yenn-Jiang Lin, Ching-Chun Huang, Shih-Lin Chang, Jacky Chung-Hao Wu, Chun-Ku Chen, Ching-Yao Weng, Siwa Chan, Wei-Wen Lin, Yu-Cheng Hsieh, Ming-Chih Lin, Yun-Ching Fu, Tsung Chen, Shih-Ann Chen, Henry Horng-Shing Lu

A challenge in accurately identifying and classifying left ventricular hypertrophy (LVH) is distinguishing it from hypertrophic cardiomyopathy (HCM) and Fabry disease. The reliance on imaging techniques often requires the expertise of multiple specialists, including cardiologists, radiologists, and geneticists, and this variability in interpretation leads to inconsistent diagnoses. LVH, HCM, and Fabry cardiomyopathy can be differentiated using T1 mapping on cardiac magnetic resonance imaging (MRI); however, differentiating HCM from Fabry cardiomyopathy on echocardiography or MRI cine images remains challenging for cardiologists. Our proposed system, the MRI short-axis view left ventricular hypertrophy classifier (MSLVHC), is a high-accuracy standardized imaging classification model trained on MRI short-axis (SAX) view cine images to distinguish between HCM and Fabry disease. The model achieved an F1-score of 0.846, an accuracy of 0.909, and an AUC of 0.914 when tested on the Taipei Veterans General Hospital (TVGH) dataset. In a single-blinded study and external testing on data from the Taichung Veterans General Hospital (TCVGH), it achieved an F1-score of 0.727, an accuracy of 0.806, and an AUC of 0.918, demonstrating its reliability and generalizability. This AI model holds promise as a valuable tool for assisting specialists in diagnosing LVH diseases.

Citations: 0
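The AUC values quoted above can be interpreted as the Mann-Whitney statistic: the probability that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative case (ties count one half). A sketch with made-up scores, illustrative only and unrelated to the study's data:

```python
def auc(pos_scores, neg_scores):
    # AUC as the Mann-Whitney probability that a random positive
    # case outranks a random negative case; ties contribute 0.5
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# hypothetical classifier scores: Fabry cases as positives, HCM as negatives
fabry = [0.92, 0.81, 0.65, 0.70]
hcm = [0.30, 0.55, 0.68, 0.12]
print(f"AUC = {auc(fabry, hcm):.3f}")
```

An AUC of 1.0 means perfect ranking of every positive above every negative; 0.5 is chance-level, so the reported 0.914 and 0.918 indicate strong separation of the two cardiomyopathies.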
In Vivo Detection of Staphylococcus aureus Infections Using Radiolabeled Antibodies Specific for Bacterial Toxins 使用放射性标记的细菌毒素特异性抗体检测金黄色葡萄球菌感染的体内情况
IF 7.6 Q1 Medicine Pub Date : 2024-04-18 DOI: 10.1155/2024/3655327
M. I. Gonzalez, M. González-Arjona, L. Cussó, Miguel Ángel Morcillo, J. Aguilera-Correa, Jaime Esteban, M. Kestler, Daniel Calle, Carlos Cerón, Marta Cortes-Canteli, Patricia Muñoz, Emilio Bouza, Manuel Desco, Beatriz Salinas
Purpose: The Gram-positive bacterium Staphylococcus aureus is one of the leading causes of infection in humans. The lack of specific noninvasive techniques for diagnosing staphylococcal infection, together with the severity of its associated complications, supports the need for new specific and selective diagnostic tools. This work presents the successful synthesis of an immunotracer that targets the α-toxin released by S. aureus. Methods: [89Zr]Zr-DFO-ToxAb was synthesized by radiolabeling an anti-α-toxin antibody with zirconium-89. The physicochemical characterization of the immunotracer was performed by high-performance liquid chromatography (HPLC), radio-thin layer chromatography (radio-TLC), and electrophoretic analysis. Its diagnostic ability was evaluated in vivo by positron emission tomography/computed tomography (PET/CT) imaging in animal models of local infection-inflammation (active S. aureus vs. heat-killed S. aureus) and infective osteoarthritis. Results: Chemical characterization established the high radiochemical yield and purity of the tracer while maintaining antibody integrity. In vivo PET/CT imaging confirmed the ability of the tracer to detect active foci of S. aureus. These results were supported by ex vivo biodistribution studies, autoradiography, and histology, which confirmed the ability of [89Zr]Zr-DFO-ToxAb to detect staphylococcal infectious foci while avoiding false positives arising from inflammatory processes.
Citations: 0
Super High Contrast USPIO-Enhanced Cerebrovascular Angiography Using Ultrashort Time-to-Echo MRI 利用超短回波时间磁共振成像进行超高对比度 USPIO 增强脑血管血管造影术
IF 7.6 Q1 Medicine Pub Date : 2024-04-13 DOI: 10.1155/2024/9763364
Liam Timms, Tianyi Zhou, J. Qiao, Codi A. Gharagouzloo, Vishala Mishra, R. Lahoud, John W. Chen, Mukesh Harisinghani, Srinivas Sridhar
Background: Ferumoxytol (Feraheme, AMAG Pharmaceuticals, Waltham, MA) is increasingly used off-label as an MR contrast agent due to its relaxivity and safety profiles. However, its potent T2* relaxivity limits achievable T1-weighted positive contrast and leads to artifacts in standard MRI protocols. Optimization of protocols for ferumoxytol deployment is necessary to realize its potential. Methods: We present first-in-human clinical results of the Quantitative Ultrashort Time-to-Echo Contrast-Enhanced (QUTE-CE) MRA technique using the superparamagnetic iron oxide nanoparticle agent ferumoxytol for vascular imaging of the head/brain in 15 subjects at 3.0 T. The QUTE-CE MRA method was implemented on a 3T scanner using a stack-of-spirals 3D Ultrashort Time-to-Echo sequence. Time-of-flight MRA and standard-TE T1-weighted (T1w) images were also collected. For comparison, gadolinium-enhanced blood-pool phase images were obtained retrospectively from clinical practice. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and intraluminal signal heterogeneity (ISH) were assessed and compared across approaches with Welch's two-sided t-test. Results: Fifteen volunteers (54 ± 17 years old, 9 women) participated. QUTE-CE MRA provided high-contrast snapshots of the arterial and venous networks with lower intraluminal heterogeneity. QUTE-CE demonstrated significantly higher SNR (1707 ± 226), blood-tissue CNR (1447 ± 189), and lower ISH (0.091 ± 0.031) compared to ferumoxytol T1-weighted (551 ± 171; 319 ± 144; 0.186 ± 0.066, respectively) and time-of-flight (343 ± 104; 269 ± 82; 0.190 ± 0.016, respectively), with p < 0.001 in each comparison. The high CNR increased the depth of vessel visualization, and vessel lumina were captured with lower heterogeneity. Conclusion: Quantitative Ultrashort Time-to-Echo Contrast-Enhanced MR angiography provides approximately 5-fold superior contrast with fewer artifacts compared to other contrast-enhanced vascular imaging techniques using ferumoxytol or gadolinium, and to noncontrast time-of-flight MR angiography, for clinical vascular imaging. This trial is registered with NCT03266848.
Citations: 0
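The SNR, CNR, and ISH comparisons above used Welch's two-sided t-test, which, unlike Student's t-test, does not assume equal variances between the two groups. A self-contained sketch of the test statistic and Welch-Satterthwaite degrees of freedom, with hypothetical per-subject SNR values (illustrative, not the study's data):

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    # Welch's unequal-variance t statistic and Welch-Satterthwaite
    # degrees of freedom for two independent samples
    va, vb = variance(a), variance(b)       # sample variances (n - 1 denominator)
    na, nb = len(a), len(b)
    sa, sb = va / na, vb / nb
    t = (mean(a) - mean(b)) / math.sqrt(sa + sb)
    df = (sa + sb) ** 2 / (sa ** 2 / (na - 1) + sb ** 2 / (nb - 1))
    return t, df

# hypothetical per-subject SNR values for two sequences
qute_ce = [1650, 1720, 1805, 1690, 1740]
t1w = [540, 610, 495, 575, 530]
t, df = welch_t(qute_ce, t1w)
print(f"t = {t:.2f}, df = {df:.1f}")
```

The two-sided p-value is then obtained from the t distribution with `df` degrees of freedom (e.g., `scipy.stats.ttest_ind(a, b, equal_var=False)` computes the whole test in one call).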