
Journal of Imaging: Latest Publications

Integrated Ultrasound Characterization of the Diet-Induced Obesity (DIO) Model in Young Adult c57bl/6j Mice: Assessment of Cardiovascular, Renal and Hepatic Changes.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2024-09-04 DOI: 10.3390/jimaging10090217
Sara Gargiulo, Virginia Barone, Denise Bonente, Tiziana Tamborrino, Giovanni Inzalaco, Lisa Gherardini, Eugenio Bertelli, Mario Chiariello

Consuming an unbalanced diet and being overweight represent a global health problem in young people and adults of both sexes, and may lead to metabolic syndrome. The diet-induced obesity (DIO) model in the C57BL/6J mouse substrain that mimics the gradual weight gain in humans consuming a "Western-type" (WD) diet is of great interest. This study aims to characterize this animal model, using high-frequency ultrasound imaging (HFUS) as a complementary tool to longitudinally monitor changes in the liver, heart and kidney. Long-term WD feeding increased mouse body weight (BW), liver/BW ratio and body condition score (BCS), transaminases, glucose and insulin, and caused dyslipidemia and insulin resistance. Echocardiography revealed subtle cardiac remodeling in WD-fed mice, highlighting a significant age-diet interaction for some left ventricular morphofunctional parameters. Qualitative and parametric HFUS analyses of the liver in WD-fed mice showed a progressive increase in echogenicity and echotexture heterogeneity, and a liver brightness equal to or higher than that of the renal cortex. Furthermore, renal circulation was impaired in WD-fed female mice. The ultrasound and histopathological findings were concordant. Overall, HFUS can improve the translational value of preclinical DIO models through an integrated approach with conventional methods, enabling a comprehensive identification of early stages of diseases in vivo and non-invasively, according to the 3Rs.
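
The parametric analysis mentioned above ultimately compares mean grey-level intensities between regions of interest. The minimal sketch below (not the authors' pipeline) computes a liver-to-renal-cortex brightness ratio from a B-mode frame; the placeholder frame, the rectangular ROIs, and their coordinates are illustrative assumptions.

```python
# Minimal sketch of ROI-based echogenicity comparison on a grey-scale B-mode frame.
# The placeholder frame and ROI coordinates are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
frame = rng.uniform(0, 255, size=(400, 600))   # placeholder for an exported HFUS frame

def mean_echogenicity(image: np.ndarray, roi: tuple) -> float:
    """Mean grey-level intensity inside a rectangular (rows, cols) ROI."""
    return float(image[roi].mean())

liver_roi = (slice(120, 180), slice(200, 280))         # hypothetical liver parenchyma ROI
renal_cortex_roi = (slice(220, 260), slice(300, 360))  # hypothetical renal cortex ROI

ratio = mean_echogenicity(frame, liver_roi) / mean_echogenicity(frame, renal_cortex_roi)
# A ratio >= 1 indicates the liver is as bright as or brighter than the renal cortex;
# tracking it over time gives a simple longitudinal readout of hepatic echogenicity.
print(f"liver/renal-cortex brightness ratio: {ratio:.2f}")
```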

Citations: 0
Decoding Breast Cancer: Using Radiomics to Non-Invasively Unveil Molecular Subtypes Directly from Mammographic Images.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2024-09-04 DOI: 10.3390/jimaging10090218
Manon A G Bakker, Maria de Lurdes Ovalho, Nuno Matela, Ana M Mota

Breast cancer is the most commonly diagnosed cancer worldwide. The therapy used and its success depend highly on the histology of the tumor. This study aimed to explore the potential of predicting the molecular subtype of breast cancer using radiomic features extracted from screening digital mammography (DM) images. A retrospective study was performed using the OPTIMAM Mammography Image Database (OMI-DB). Four binary classification tasks were performed: luminal A vs. non-luminal A, luminal B vs. non-luminal B, TNBC vs. non-TNBC, and HER2 vs. non-HER2. Feature selection was carried out by Pearson correlation and LASSO. The support vector machine (SVM) and naive Bayes (NB) machine learning (ML) classifiers were used, and their performance was evaluated with the accuracy and the area under the receiver operating characteristic curve (AUC). A total of 186 patients were included in the study: 58 luminal A, 35 luminal B, 52 TNBC, and 41 HER2. The SVM classifier achieved test AUCs of 0.855 for luminal A, 0.812 for luminal B, 0.789 for TNBC, and 0.755 for HER2. The NB classifier showed test AUCs of 0.714 for luminal A, 0.746 for luminal B, 0.593 for TNBC, and 0.714 for HER2. The SVM classifier outperformed NB with statistical significance for luminal A (p = 0.0268) and TNBC (p = 0.0073). Our study showed the potential of radiomics for non-invasive breast cancer subtype classification.
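
As a rough illustration of the workflow described in this abstract, the sketch below wires LASSO-based feature selection into an SVM and reports a test AUC with scikit-learn. The random feature matrix, the 20-feature cap, and the RBF kernel are placeholder assumptions, and the Pearson-correlation pre-filter used in the study is omitted.

```python
# Hedged sketch: LASSO-driven feature selection followed by an SVM evaluated with
# ROC-AUC. Placeholder data stands in for precomputed radiomic features.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(186, 100))       # placeholder radiomic feature matrix
y = rng.integers(0, 2, size=186)      # placeholder binary labels (e.g. luminal A vs. not)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = make_pipeline(
    StandardScaler(),
    # Keep the 20 features with the largest absolute LASSO coefficients (illustrative cap).
    SelectFromModel(LassoCV(cv=5), max_features=20, threshold=-np.inf),
    SVC(kernel="rbf", probability=True, random_state=0),
)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"test AUC: {auc:.3f}")
```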

Citations: 0
Concrete Crack Detection and Segregation: A Feature Fusion, Crack Isolation, and Explainable AI-Based Approach.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2024-08-31 DOI: 10.3390/jimaging10090215
Reshma Ahmed Swarna, Muhammad Minoar Hossain, Mst Rokeya Khatun, Mohammad Motiur Rahman, Arslan Munir

Scientific knowledge of image-based crack detection methods is limited in understanding their performance across diverse crack sizes, types, and environmental conditions. Builders and engineers often face difficulties with image resolution, detecting fine cracks, and differentiating between structural and non-structural issues. Enhanced algorithms and analysis techniques are needed for more accurate assessments. Hence, this research aims to generate an intelligent scheme that can recognize the presence of cracks and visualize the percentage of cracks from an image along with an explanation. The proposed method fuses features from concrete surface images through a ResNet-50 convolutional neural network (CNN) and curvelet transform handcrafted (HC) method, optimized by linear discriminant analysis (LDA), and the eXtreme gradient boosting (XGB) classifier then uses these features to recognize cracks. This study evaluates several CNN models, including VGG-16, VGG-19, Inception-V3, and ResNet-50, and various HC techniques, such as wavelet transform, contourlet transform, and curvelet transform for feature extraction. Principal component analysis (PCA) and LDA are assessed for feature optimization. For classification, XGB, random forest (RF), adaptive boosting (AdaBoost), and category boosting (CatBoost) are tested. To isolate and quantify the crack region, this research combines image thresholding, morphological operations, and contour detection with the convex hulls method and forms a novel algorithm. Two explainable AI (XAI) tools, local interpretable model-agnostic explanations (LIMEs) and gradient-weighted class activation mapping++ (Grad-CAM++), are integrated with the proposed method to enhance result clarity. This research introduces a novel feature fusion approach that enhances crack detection accuracy and interpretability. The method demonstrates superior performance by achieving 99.93% and 99.69% accuracy on two existing datasets, outperforming state-of-the-art methods. Additionally, the development of an algorithm for isolating and quantifying crack regions represents a significant advancement in image processing for structural analysis. The proposed approach provides a robust and reliable tool for real-time crack detection and assessment in concrete structures, facilitating timely maintenance and improving structural safety. By offering detailed explanations of the model's decisions, the research addresses the critical need for transparency in AI applications, thus increasing trust and adoption in engineering practice.
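
The crack-isolation step described above (thresholding, morphological operations, contour detection, convex hulls) can be sketched with OpenCV as follows. This is a simplified stand-in for the paper's algorithm: the synthetic image, the use of Otsu thresholding, the kernel size, and the area cutoff are all illustrative assumptions.

```python
# Hedged sketch of crack isolation and quantification: threshold, clean with
# morphology, find contours, fill their convex hulls, and report the crack fraction.
import cv2
import numpy as np

# Synthetic stand-in for a concrete surface photo: bright background, thin dark crack.
image = np.full((256, 256), 180, dtype=np.uint8)
cv2.line(image, (20, 30), (230, 220), color=40, thickness=4)

# Cracks are darker than the surrounding concrete, so invert after Otsu thresholding.
_, binary = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Opening removes speckle; closing bridges small gaps along the crack.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)

# Contours, then convex hulls around each retained contour.
contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
mask = np.zeros_like(cleaned)
for contour in contours:
    if cv2.contourArea(contour) > 10:   # illustrative cutoff to drop tiny blobs
        cv2.drawContours(mask, [cv2.convexHull(contour)], -1, 255, thickness=cv2.FILLED)

crack_percentage = 100.0 * np.count_nonzero(mask) / mask.size
print(f"estimated crack coverage: {crack_percentage:.2f}% of the image")
```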

Citations: 0
Editorial for the Special Issue on "Feature Papers in Section AI in Imaging".
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2024-08-31 DOI: 10.3390/jimaging10090214
Antonio Fernández-Caballero

Artificial intelligence (AI) techniques are being used by the imaging academia and industry to solve a wide range of previously intractable problems [...].

Citations: 0
FineTea: A Novel Fine-Grained Action Recognition Video Dataset for Tea Ceremony Actions.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2024-08-31 DOI: 10.3390/jimaging10090216
Changwei Ouyang, Yun Yi, Hanli Wang, Jin Zhou, Tao Tian

Methods based on deep learning have achieved great success in the field of video action recognition. When these methods are applied to real-world scenarios that require fine-grained analysis of actions, such as being tested on a tea ceremony, limitations may arise. To promote the development of fine-grained action recognition, a fine-grained video action dataset is constructed by collecting videos of tea ceremony actions. This dataset includes 2745 video clips. By using a hierarchical fine-grained action classification approach, these clips are divided into 9 basic action classes and 31 fine-grained action subclasses. To better establish a fine-grained temporal model for tea ceremony actions, a method named TSM-ConvNeXt is proposed that integrates a TSM into the high-performance convolutional neural network ConvNeXt. Compared to a baseline method using ResNet50, the experimental performance of TSM-ConvNeXt is improved by 7.31%. Furthermore, compared with the state-of-the-art methods for action recognition on the FineTea and Diving48 datasets, the proposed approach achieves the best experimental results. The FineTea dataset is publicly available.
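
For context on how a temporal shift module (TSM) lets a 2D backbone such as ConvNeXt exchange information across frames, here is a minimal PyTorch sketch of the shift operation. It follows the standard TSM formulation (shift 1/8 of the channels backward and 1/8 forward in time) rather than the paper's exact implementation, and the tensor sizes are illustrative.

```python
# Hedged sketch of the Temporal Shift Module (TSM) operation, standard formulation.
import torch

def temporal_shift(x: torch.Tensor, n_segments: int, fold_div: int = 8) -> torch.Tensor:
    """Shift part of the channels along the time axis.

    x: (batch * n_segments, channels, height, width) feature map.
    """
    nt, c, h, w = x.shape
    x = x.view(nt // n_segments, n_segments, c, h, w)
    fold = c // fold_div
    out = torch.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                   # shift backward: frame t sees t+1
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]   # shift forward: frame t sees t-1
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]              # remaining channels stay in place
    return out.view(nt, c, h, w)

# Usage: apply the shift to a feature map before a ConvNeXt block's convolution,
# so all spatial operations stay 2D while temporal context is exchanged at no parameter cost.
features = torch.randn(2 * 8, 96, 56, 56)   # 2 clips x 8 frames, ConvNeXt stage-1 width
shifted = temporal_shift(features, n_segments=8)
print(shifted.shape)   # torch.Size([16, 96, 56, 56])
```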

Citations: 0
Longitudinal Imaging of Injured Spinal Cord Myelin and White Matter with 3D Ultrashort Echo Time Magnetization Transfer (UTE-MT) and Diffusion MRI.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2024-08-30 DOI: 10.3390/jimaging10090213
Qingbo Tang, Yajun Ma, Qun Cheng, Yuanshan Wu, Junyuan Chen, Jiang Du, Pengzhe Lu, Eric Y Chang

Quantitative MRI techniques could be helpful to noninvasively and longitudinally monitor dynamic changes in spinal cord white matter following injury, but imaging and postprocessing techniques in small animals remain lacking. Unilateral C5 hemisection lesions were created in a rat model, and ultrashort echo time magnetization transfer (UTE-MT) and diffusion-weighted sequences were used for imaging following injury. Magnetization transfer ratio (MTR) measurements and preferential diffusion along the longitudinal axis of the spinal cord were calculated as fractional anisotropy or an apparent diffusion coefficient ratio over transverse directions. The area of myelinated white matter was obtained by thresholding the spinal cord using mean MTR or diffusion ratio values from the contralesional side of the spinal cord. A decrease in white matter areas was observed on the ipsilesional side caudal to the lesions, which is consistent with known myelin and axonal changes following spinal cord injury. The myelinated white matter area obtained through the UTE-MT technique and the white matter area obtained through diffusion imaging techniques showed better performance to distinguish evolution after injury (AUCs > 0.94, p < 0.001) than the mean MTR (AUC = 0.74, p = 0.01) or ADC ratio (AUC = 0.68, p = 0.05) values themselves. Immunostaining for myelin basic protein (MBP) and neurofilament protein NF200 (NF200) showed atrophy and axonal degeneration, confirming the MRI results. These compositional and microstructural MRI techniques may be used to detect demyelination or remyelination in the spinal cord after spinal cord injury.
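
Two of the quantitative steps described above, computing an MTR map and thresholding it at the contralesional mean to estimate myelinated white-matter area, reduce to a few lines of NumPy. The sketch below uses placeholder images, a fixed contralesional mask, and an assumed voxel area; it illustrates the calculation, not the study's processing pipeline.

```python
# Hedged sketch: MTR map from MT-off/MT-on images, then white-matter area by
# thresholding at the contralesional mean. All inputs are placeholders.
import numpy as np

def mtr_map(mt_off: np.ndarray, mt_on: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Voxel-wise MTR (%) = 100 * (S_off - S_on) / S_off."""
    return 100.0 * (mt_off - mt_on) / np.maximum(mt_off, eps)

rng = np.random.default_rng(0)
mt_off = rng.uniform(500.0, 1000.0, size=(128, 128))      # placeholder image without MT saturation
mt_on = mt_off * rng.uniform(0.5, 0.9, size=(128, 128))   # placeholder image with MT saturation

mtr = mtr_map(mt_off, mt_on)

# Threshold at the mean MTR of a contralesional white-matter mask (here a fixed block).
contralesional_mask = np.zeros(mtr.shape, dtype=bool)
contralesional_mask[40:60, 20:40] = True
threshold = mtr[contralesional_mask].mean()

voxel_area_mm2 = 0.01   # assumed in-plane voxel area (0.1 mm x 0.1 mm)
myelinated_area = np.count_nonzero(mtr >= threshold) * voxel_area_mm2
print(f"threshold = {threshold:.1f}%, estimated myelinated area = {myelinated_area:.2f} mm^2")
```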

Citations: 0
Development of a Machine Learning Model for the Classification of Enterobius vermicularis Egg.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2024-08-28 DOI: 10.3390/jimaging10090212
Natthanai Chaibutr, Pongphan Pongpanitanont, Sakhone Laymanivong, Tongjit Thanchomnang, Penchom Janwan

Enterobius vermicularis (pinworm) infections are a significant global health issue, affecting children predominantly in environments like schools and daycares. Traditional diagnosis using the scotch tape technique involves examining E. vermicularis eggs under a microscope. This method is time-consuming and depends heavily on the examiner's expertise. To improve this, convolutional neural networks (CNNs) have been used to automate the detection of pinworm eggs from microscopic images. In our study, we enhanced E. vermicularis egg detection using a CNN benchmarked against leading models. We digitized and augmented 40,000 images of E. vermicularis eggs (class 1) and artifacts (class 0) for comprehensive training, using an 80:20 training-validation split and five-fold cross-validation. The proposed CNN model showed limited initial performance but achieved 90.0% accuracy, precision, recall, and F1-score after data augmentation. It also demonstrated improved stability with an ROC-AUC metric increase from 0.77 to 0.97. Despite its smaller file size, our CNN model performed comparably to larger models. Notably, the Xception model achieved 99.0% accuracy, precision, recall, and F1-score. These findings highlight the effectiveness of data augmentation and advanced CNN architectures in improving diagnostic accuracy and efficiency for E. vermicularis infections.
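
A minimal version of the training loop implied by this abstract, on-the-fly augmentation feeding a binary egg-versus-artifact classifier, is sketched below in PyTorch. The random tensors, the specific augmentations, and the ResNet-18 backbone are placeholder assumptions standing in for the digitized micrographs and the paper's own CNN and Xception models.

```python
# Hedged sketch: on-the-fly augmentation feeding a binary egg-vs-artifact classifier.
# Random tensors stand in for the digitized micrographs; all hyperparameters are illustrative.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models, transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(20),
    transforms.ColorJitter(brightness=0.2),
])

# Placeholder data: 32 RGB "micrographs", labels 0 = artifact, 1 = E. vermicularis egg.
images = torch.rand(32, 3, 128, 128)
labels = torch.randint(0, 2, (32,))
loader = DataLoader(TensorDataset(images, labels), batch_size=8, shuffle=True)

model = models.resnet18(weights=None)            # stand-in backbone, trained from scratch here
model.fc = nn.Linear(model.fc.in_features, 2)    # binary head: artifact vs. egg

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for batch_images, batch_labels in loader:        # one illustrative epoch
    optimizer.zero_grad()
    loss = criterion(model(augment(batch_images)), batch_labels)
    loss.backward()
    optimizer.step()
```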

Citations: 0
AI Use in Mammography for Diagnosing Metachronous Contralateral Breast Cancer.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2024-08-28 DOI: 10.3390/jimaging10090211
Mio Adachi, Tomoyuki Fujioka, Toshiyuki Ishiba, Miyako Nara, Sakiko Maruya, Kumiko Hayashi, Yuichi Kumaki, Emi Yamaga, Leona Katsuta, Du Hao, Mikael Hartman, Feng Mengling, Goshi Oda, Kazunori Kubota, Ukihide Tateishi

Although several studies have been conducted on artificial intelligence (AI) use in mammography (MG), there is still a paucity of research on the diagnosis of metachronous bilateral breast cancer (BC), which is typically more challenging to diagnose. This study aimed to determine whether AI could enhance BC detection, achieving earlier or more accurate diagnoses than radiologists in cases of metachronous contralateral BC. We included patients who underwent unilateral BC surgery and subsequently developed contralateral BC. This retrospective study evaluated the AI-supported MG diagnostic system called FxMammo™. We evaluated the capability of FxMammo™ (FathomX Pte Ltd., Singapore) to diagnose BC more accurately or earlier than radiologists' assessments. This evaluation was supplemented by reviewing MG readings made by radiologists. Out of 1101 patients who underwent surgery, 10 who had initially undergone a partial mastectomy and later developed contralateral BC were analyzed. The AI system identified malignancies in six cases (60%), while radiologists identified five cases (50%). Notably, two cases (20%) were diagnosed solely by the AI system. Additionally, for these cases, the AI system had identified malignancies a year before the conventional diagnosis. This study highlights the AI system's effectiveness in diagnosing metachronous contralateral BC via MG. In some cases, the AI system consistently diagnosed cancer earlier than radiological assessments.

Citations: 0
A New Approach for Effective Retrieval of Medical Images: A Step towards Computer-Assisted Diagnosis.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2024-08-26 DOI: 10.3390/jimaging10090210
Suchita Sharma, Ashutosh Aggarwal

The biomedical imaging field has grown enormously in the past decade. In the era of digitization, the demand for computer-assisted diagnosis is increasing day by day. The COVID-19 pandemic further emphasized how retrieving meaningful information from medical repositories can aid in improving the quality of patient's diagnosis. Therefore, content-based retrieval of medical images has a very prominent role in fulfilling our ultimate goal of developing automated computer-assisted diagnosis systems. Therefore, this paper presents a content-based medical image retrieval system that extracts multi-resolution, noise-resistant, rotation-invariant texture features in the form of a novel pattern descriptor, i.e., MsNrRiTxP, from medical images. In the proposed approach, the input medical image is initially decomposed into three neutrosophic images on its transformation into the neutrosophic domain. Afterwards, three distinct pattern descriptors, i.e., MsTrP, NrTxP, and RiTxP, are derived at multiple scales from the three neutrosophic images. The proposed MsNrRiTxP pattern descriptor is obtained by scale-wise concatenation of the joint histograms of MsTrP×RiTxP and NrTxP×RiTxP. To demonstrate the efficacy of the proposed system, medical images of different modalities, i.e., CT and MRI, from four test datasets are considered in our experimental setup. The retrieval performance of the proposed approach is exhaustively compared with several existing, recent, and state-of-the-art local binary pattern-based variants. The retrieval rates obtained by the proposed approach for the noise-free and noisy variants of the test datasets are observed to be substantially higher than the compared ones.
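
The descriptor construction outlined above ultimately concatenates joint histograms of pattern maps across scales. The sketch below shows only that mechanical step: the MsTrP, NrTxP, and RiTxP maps are replaced by placeholder integer label images, and the bin counts and scales are illustrative, since deriving the real pattern maps from the neutrosophic images is the substantive part of the method.

```python
# Hedged sketch: scale-wise concatenation of joint histograms of two pattern maps,
# with placeholder label images standing in for the real MsTrP/NrTxP/RiTxP maps.
import numpy as np

def joint_histogram(map_a: np.ndarray, map_b: np.ndarray, bins_a: int, bins_b: int) -> np.ndarray:
    """Flattened, L1-normalised 2D joint histogram of two integer pattern maps."""
    hist, _, _ = np.histogram2d(
        map_a.ravel(), map_b.ravel(),
        bins=[bins_a, bins_b],
        range=[[0, bins_a], [0, bins_b]],
    )
    return (hist / hist.sum()).ravel()

rng = np.random.default_rng(0)
descriptor_parts = []
for scale in (1, 2, 3):
    size = 256 // scale
    # Placeholder pattern maps at this scale; the real maps would be derived from
    # the three neutrosophic images.
    ms_tr_p = rng.integers(0, 16, size=(size, size))
    nr_tx_p = rng.integers(0, 16, size=(size, size))
    ri_tx_p = rng.integers(0, 8, size=(size, size))
    descriptor_parts.append(joint_histogram(ms_tr_p, ri_tx_p, 16, 8))   # MsTrP x RiTxP
    descriptor_parts.append(joint_histogram(nr_tx_p, ri_tx_p, 16, 8))   # NrTxP x RiTxP

descriptor = np.concatenate(descriptor_parts)   # scale-wise concatenation -> final feature vector
print(descriptor.shape)   # (3 scales x 2 joint histograms x 128 bins,) = (768,)
```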

Citations: 0
Ex Vivo Simultaneous H₂¹⁵O Positron Emission Tomography and Magnetic Resonance Imaging of Porcine Kidneys-A Feasibility Study.
IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY Pub Date : 2024-08-25 DOI: 10.3390/jimaging10090209
Maibritt Meldgaard Arildsen, Christian Østergaard Mariager, Christoffer Vase Overgaard, Thomas Vorre, Martin Bøjesen, Niels Moeslund, Aage Kristian Olsen Alstrup, Lars Poulsen Tolbod, Mikkel Holm Vendelbo, Steffen Ringgaard, Michael Pedersen, Niels Henrik Buus

Aim: The aim was to establish combined H₂¹⁵O PET/MRI during ex vivo normothermic machine perfusion (NMP) of isolated porcine kidneys. We examined whether changes in renal arterial blood flow (RABF) are accompanied by changes of a similar magnitude in renal blood perfusion (RBP) as well as the relation between RBP and renal parenchymal oxygenation (RPO).

Methods: Pig kidneys (n = 7) were connected to an NMP circuit. PET/MRI was performed at two different pump flow levels: a blood-oxygenation-level-dependent (BOLD) MRI sequence was performed simultaneously with an H₂¹⁵O PET sequence for determination of RBP.

Results: RBP was measured using H₂¹⁵O PET in all kidneys (flow 1: 0.42-0.76 mL/min/g, flow 2: 0.7-1.6 mL/min/g). We found a linear correlation between changes in delivered blood flow from the perfusion pump and changes in the measured RBP using PET imaging (r² = 0.87).

Conclusion: Our study demonstrated the feasibility of combined H₂¹⁵O PET/MRI during NMP of isolated porcine kidneys, with tissue oxygenation being stable over time. The introduction of H₂¹⁵O PET/MRI in nephrological research could be highly relevant for future pre-transplant kidney evaluation and as a tool for studying renal physiology in healthy and diseased kidneys.
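
The r² reported in the Results comes from an ordinary linear regression of PET-measured perfusion on pump-delivered flow; a minimal sketch of that calculation is below. The numbers are made-up placeholders spanning the stated flow ranges, not the study's data.

```python
# Hedged sketch: linear regression of PET-measured RBP against pump-delivered flow.
# The arrays are illustrative placeholders, not the study's measurements.
import numpy as np
from scipy.stats import linregress

pump_flow = np.array([0.45, 0.55, 0.65, 0.75, 0.9, 1.1, 1.4])    # mL/min/g, illustrative
pet_rbp = np.array([0.42, 0.58, 0.61, 0.78, 0.95, 1.15, 1.45])   # mL/min/g, illustrative

fit = linregress(pump_flow, pet_rbp)
print(f"slope = {fit.slope:.2f}, intercept = {fit.intercept:.2f}, r^2 = {fit.rvalue**2:.2f}")
```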

我们的目的是在离体猪肾的体外常温机器灌注(NMP)过程中建立 H215O PET/MRI 组合。我们研究了肾动脉血流(RABF)的变化是否伴随着肾脏血液灌注(RBP)类似程度的变化,以及 RBP 和肾实质氧合(RPO)之间的关系:方法:将猪肾脏(n = 7)连接到 NMP 电路。PET/MRI 在两种不同的泵流量水平下进行:血液氧合水平依赖性 (BOLD) MRI 序列与用于测定 RBP 的 H215O PET 序列同时进行:使用 H215O PET 测量了所有肾脏的 RBP(流量 1:0.42-0.76 毫升/分钟/克,流量 2:0.7-1.6 毫升/分钟/克)。我们发现灌注泵输送的血流量变化与 PET 成像测量的 RBP 变化之间存在线性相关(r2 = 0.87):我们的研究证明了在离体猪肾NMP期间结合H215O PET/MRI的可行性,组织氧合随时间保持稳定。在肾脏病研究中引入 H215O PET/MRI 对未来移植前肾脏评估以及健康和病变肾脏的肾脏生理研究具有重要意义。
{"title":"Ex Vivo Simultaneous H<sub>2</sub><sup>15</sup>O Positron Emission Tomography and Magnetic Resonance Imaging of Porcine Kidneys-A Feasibility Study.","authors":"Maibritt Meldgaard Arildsen, Christian Østergaard Mariager, Christoffer Vase Overgaard, Thomas Vorre, Martin Bøjesen, Niels Moeslund, Aage Kristian Olsen Alstrup, Lars Poulsen Tolbod, Mikkel Holm Vendelbo, Steffen Ringgaard, Michael Pedersen, Niels Henrik Buus","doi":"10.3390/jimaging10090209","DOIUrl":"https://doi.org/10.3390/jimaging10090209","url":null,"abstract":"<p><p>The aim was to establish combined H<sub>2</sub><sup>15</sup>O PET/MRI during ex vivo normothermic machine perfusion (NMP) of isolated porcine kidneys. We examined whether changes in renal arterial blood flow (RABF) are accompanied by changes of a similar magnitude in renal blood perfusion (RBP) as well as the relation between RBP and renal parenchymal oxygenation (RPO).</p><p><strong>Methods: </strong>Pig kidneys (n = 7) were connected to a NMP circuit. PET/MRI was performed at two different pump flow levels: a blood-oxygenation-level-dependent (BOLD) MRI sequence performed simultaneously with a H<sub>2</sub><sup>15</sup>O PET sequence for determination of RBP.</p><p><strong>Results: </strong>RBP was measured using H<sub>2</sub><sup>15</sup>O PET in all kidneys (flow 1: 0.42-0.76 mL/min/g, flow 2: 0.7-1.6 mL/min/g). We found a linear correlation between changes in delivered blood flow from the perfusion pump and changes in the measured RBP using PET imaging (r<sup>2</sup> = 0.87).</p><p><strong>Conclusion: </strong>Our study demonstrated the feasibility of combined H<sub>2</sub><sup>15</sup>O PET/MRI during NMP of isolated porcine kidneys with tissue oxygenation being stable over time. The introduction of H215O PET/MRI in nephrological research could be highly relevant for future pre-transplant kidney evaluation and as a tool for studying renal physiology in healthy and diseased kidneys.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2024-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11433579/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142355817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0