Pub Date: 2024-09-04 | DOI: 10.3390/jimaging10090217
Sara Gargiulo, Virginia Barone, Denise Bonente, Tiziana Tamborrino, Giovanni Inzalaco, Lisa Gherardini, Eugenio Bertelli, Mario Chiariello
Consuming an unbalanced diet and being overweight represent a global health problem in young people and adults of both sexes, and may lead to metabolic syndrome. The diet-induced obesity (DIO) model in the C57BL/6J mouse substrain, which mimics the gradual weight gain of humans consuming a "Western-type" diet (WD), is of great interest. This study aims to characterize this animal model, using high-frequency ultrasound imaging (HFUS) as a complementary tool to longitudinally monitor changes in the liver, heart and kidney. Long-term WD feeding increased body weight (BW), the liver/BW ratio, body condition score (BCS), transaminases, glucose and insulin in the mice, and caused dyslipidemia and insulin resistance. Echocardiography revealed subtle cardiac remodeling in WD-fed mice, highlighting a significant age-diet interaction for some left ventricular morphofunctional parameters. Qualitative and parametric HFUS analyses of the liver in WD-fed mice showed a progressive increase in echogenicity and echotexture heterogeneity, and equal or higher brightness of the renal cortex. Furthermore, renal circulation was impaired in WD-fed female mice. The ultrasound and histopathological findings were concordant. Overall, HFUS can improve the translational value of preclinical DIO models through an integrated approach with conventional methods, enabling a comprehensive identification of early stages of diseases in vivo and non-invasively, according to the 3Rs.
Title: Integrated Ultrasound Characterization of the Diet-Induced Obesity (DIO) Model in Young Adult C57BL/6J Mice: Assessment of Cardiovascular, Renal and Hepatic Changes (Journal of Imaging, IF 2.7). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11433005/pdf/
Pub Date: 2024-09-04 | DOI: 10.3390/jimaging10090218
Manon A G Bakker, Maria de Lurdes Ovalho, Nuno Matela, Ana M Mota
Breast cancer is the most commonly diagnosed cancer worldwide. The therapy used and its success depend highly on the histology of the tumor. This study aimed to explore the potential of predicting the molecular subtype of breast cancer using radiomic features extracted from screening digital mammography (DM) images. A retrospective study was performed using the OPTIMAM Mammography Image Database (OMI-DB). Four binary classification tasks were performed: luminal A vs. non-luminal A, luminal B vs. non-luminal B, TNBC vs. non-TNBC, and HER2 vs. non-HER2. Feature selection was carried out by Pearson correlation and LASSO. The support vector machine (SVM) and naive Bayes (NB) machine learning (ML) classifiers were used, and their performance was evaluated with the accuracy and the area under the receiver operating characteristic curve (AUC). A total of 186 patients were included in the study: 58 luminal A, 35 luminal B, 52 TNBC, and 41 HER2. The SVM classifier resulted in AUCs during testing of 0.855 for luminal A, 0.812 for luminal B, 0.789 for TNBC, and 0.755 for HER2. The NB classifier showed AUCs during testing of 0.714 for luminal A, 0.746 for luminal B, 0.593 for TNBC, and 0.714 for HER2. The SVM classifier outperformed NB with statistical significance for luminal A (p = 0.0268) and TNBC (p = 0.0073). Our study showed the potential of radiomics for non-invasive breast cancer subtype classification.
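The setup described above — a binary task scored with AUC on a held-out test split, comparing an SVM against naive Bayes — can be sketched as follows. Synthetic features stand in for the study's radiomic features; the sample and feature counts here are illustrative assumptions, not the authors' data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for radiomic features of one task, e.g. luminal A vs. non-luminal A.
X, y = make_classification(n_samples=186, n_features=30, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y,
                                          random_state=0)

svm = make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))
nb = make_pipeline(StandardScaler(), GaussianNB())

aucs = {}
for name, clf in [("SVM", svm), ("NB", nb)]:
    clf.fit(X_tr, y_tr)
    scores = clf.predict_proba(X_te)[:, 1]  # probability of the positive class
    aucs[name] = roc_auc_score(y_te, scores)
print(aucs)
```

In the paper, this comparison is repeated per subtype task after Pearson/LASSO feature selection, which is omitted here for brevity.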
Title: Decoding Breast Cancer: Using Radiomics to Non-Invasively Unveil Molecular Subtypes Directly from Mammographic Images (Journal of Imaging, IF 2.7). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11432960/pdf/
Pub Date: 2024-08-31 | DOI: 10.3390/jimaging10090215
Reshma Ahmed Swarna, Muhammad Minoar Hossain, Mst Rokeya Khatun, Mohammad Motiur Rahman, Arslan Munir
Scientific knowledge of image-based crack detection methods is limited in understanding their performance across diverse crack sizes, types, and environmental conditions. Builders and engineers often face difficulties with image resolution, detecting fine cracks, and differentiating between structural and non-structural issues. Enhanced algorithms and analysis techniques are needed for more accurate assessments. Hence, this research aims to develop an intelligent scheme that can recognize the presence of cracks and visualize the percentage of cracks from an image, along with an explanation. The proposed method fuses features from concrete surface images through a ResNet-50 convolutional neural network (CNN) and a curvelet-transform handcrafted (HC) method, optimized by linear discriminant analysis (LDA); the eXtreme gradient boosting (XGB) classifier then uses these features to recognize cracks. This study evaluates several CNN models, including VGG-16, VGG-19, Inception-V3, and ResNet-50, and various HC techniques, such as the wavelet transform, contourlet transform, and curvelet transform, for feature extraction. Principal component analysis (PCA) and LDA are assessed for feature optimization. For classification, XGB, random forest (RF), adaptive boosting (AdaBoost), and category boosting (CatBoost) are tested. To isolate and quantify the crack region, this research combines image thresholding, morphological operations, and contour detection with the convex hulls method and forms a novel algorithm. Two explainable AI (XAI) tools, local interpretable model-agnostic explanations (LIMEs) and gradient-weighted class activation mapping++ (Grad-CAM++), are integrated with the proposed method to enhance result clarity. This research introduces a novel feature fusion approach that enhances crack detection accuracy and interpretability.
The method demonstrates superior performance by achieving 99.93% and 99.69% accuracy on two existing datasets, outperforming state-of-the-art methods. Additionally, the development of an algorithm for isolating and quantifying crack regions represents a significant advancement in image processing for structural analysis. The proposed approach provides a robust and reliable tool for real-time crack detection and assessment in concrete structures, facilitating timely maintenance and improving structural safety. By offering detailed explanations of the model's decisions, the research addresses the critical need for transparency in AI applications, thus increasing trust and adoption in engineering practice.
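The crack-isolation idea described in the abstract — threshold dark pixels, clean the mask morphologically, keep connected components, and report the crack-area percentage — can be sketched as below. The synthetic image, threshold, and size filter are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np
from scipy import ndimage

# Synthetic 100x100 "concrete surface": bright background, one dark crack line.
img = np.full((100, 100), 200, dtype=np.uint8)
img[20:80, 48:52] = 30                       # a vertical crack, 4 px wide

mask = img < 100                             # dark pixels are crack candidates
mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))  # bridge gaps
labels, n = ndimage.label(mask)              # connected crack components
sizes = ndimage.sum(mask, labels, range(1, n + 1))
mask = np.isin(labels, 1 + np.flatnonzero(sizes >= 20))  # drop tiny specks

crack_pct = 100.0 * mask.sum() / mask.size   # crack area as % of the image
print(f"{n} component(s), crack area = {crack_pct:.2f}%")
```

The paper additionally uses contour detection with convex hulls to delimit the crack region; connected-component labeling is used here as a simpler stand-in for that step.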
Title: Concrete Crack Detection and Segregation: A Feature Fusion, Crack Isolation, and Explainable AI-Based Approach (Journal of Imaging, IF 2.7). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11432902/pdf/
Pub Date: 2024-08-31 | DOI: 10.3390/jimaging10090214
Antonio Fernández-Caballero
Artificial intelligence (AI) techniques are being used across imaging academia and industry to solve a wide range of previously intractable problems [...].
Title: Editorial for the Special Issue on "Feature Papers in Section AI in Imaging" (Journal of Imaging, IF 2.7). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11433370/pdf/
Pub Date: 2024-08-31 | DOI: 10.3390/jimaging10090216
Changwei Ouyang, Yun Yi, Hanli Wang, Jin Zhou, Tao Tian
Methods based on deep learning have achieved great success in the field of video action recognition. When these methods are applied to real-world scenarios that require fine-grained analysis of actions, such as analyzing tea ceremony actions, limitations may arise. To promote the development of fine-grained action recognition, a fine-grained video action dataset is constructed by collecting videos of tea ceremony actions. This dataset includes 2745 video clips. By using a hierarchical fine-grained action classification approach, these clips are divided into 9 basic action classes and 31 fine-grained action subclasses. To better establish a fine-grained temporal model for tea ceremony actions, a method named TSM-ConvNeXt is proposed that integrates a temporal shift module (TSM) into the high-performance convolutional neural network ConvNeXt. Compared to a baseline method using ResNet50, the experimental performance of TSM-ConvNeXt is improved by 7.31%. Furthermore, compared with the state-of-the-art methods for action recognition on the FineTea and Diving48 datasets, the proposed approach achieves the best experimental results. The FineTea dataset is publicly available.
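The temporal shift module that TSM-ConvNeXt builds on shifts a fraction of channels forward and backward along the time axis, so a 2D backbone sees neighboring frames at no extra compute cost. A minimal NumPy sketch, using the common 1/8 shift fraction as an assumption (the paper's exact integration into ConvNeXt is not reproduced here):

```python
import numpy as np

def temporal_shift(x: np.ndarray, fold_div: int = 8) -> np.ndarray:
    """x: (batch, time, channels, h, w). Returns the temporally shifted tensor."""
    b, t, c, h, w = x.shape
    fold = c // fold_div
    out = np.zeros_like(x)
    out[:, 1:, :fold] = x[:, :-1, :fold]                  # 1/8 of channels shifted forward
    out[:, :-1, fold:2 * fold] = x[:, 1:, fold:2 * fold]  # 1/8 shifted backward
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]             # remaining channels untouched
    return out

x = np.random.rand(2, 4, 16, 8, 8)
y = temporal_shift(x)
# Frame 2's first channel block now holds frame 1's features.
print(np.allclose(y[:, 2, :2], x[:, 1, :2]))  # prints True
```

In a real network this shift is inserted before the convolution in each block, leaving the parameter count unchanged.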
Title: FineTea: A Novel Fine-Grained Action Recognition Video Dataset for Tea Ceremony Actions (Journal of Imaging, IF 2.7). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11433221/pdf/
Pub Date: 2024-08-30 | DOI: 10.3390/jimaging10090213
Qingbo Tang, Yajun Ma, Qun Cheng, Yuanshan Wu, Junyuan Chen, Jiang Du, Pengzhe Lu, Eric Y Chang
Quantitative MRI techniques could be helpful to noninvasively and longitudinally monitor dynamic changes in spinal cord white matter following injury, but imaging and postprocessing techniques in small animals remain lacking. Unilateral C5 hemisection lesions were created in a rat model, and ultrashort echo time magnetization transfer (UTE-MT) and diffusion-weighted sequences were used for imaging following injury. Magnetization transfer ratio (MTR) measurements and preferential diffusion along the longitudinal axis of the spinal cord were calculated as fractional anisotropy or an apparent diffusion coefficient ratio over transverse directions. The area of myelinated white matter was obtained by thresholding the spinal cord using mean MTR or diffusion ratio values from the contralesional side of the spinal cord. A decrease in white matter areas was observed on the ipsilesional side caudal to the lesions, which is consistent with known myelin and axonal changes following spinal cord injury. The myelinated white matter area obtained through the UTE-MT technique and the white matter area obtained through diffusion imaging techniques showed better performance to distinguish evolution after injury (AUCs > 0.94, p < 0.001) than the mean MTR (AUC = 0.74, p = 0.01) or ADC ratio (AUC = 0.68, p = 0.05) values themselves. Immunostaining for myelin basic protein (MBP) and neurofilament protein NF200 (NF200) showed atrophy and axonal degeneration, confirming the MRI results. These compositional and microstructural MRI techniques may be used to detect demyelination or remyelination in the spinal cord after spinal cord injury.
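The MTR computation and the thresholding step used above to estimate myelinated white-matter area can be sketched as follows. The synthetic signal maps and the "mean contralesional MTR" threshold are illustrative assumptions based on the abstract's description, not the study's processing pipeline.

```python
import numpy as np

# Synthetic 64x64 signal maps: S0 (no MT pulse) and S_mt (with MT pulse).
s0 = np.full((64, 64), 1000.0)
s_mt = np.full((64, 64), 700.0)   # healthy tissue: strong MT effect
s_mt[:, 32:] = 900.0              # "lesioned" half: weaker MT effect

# MTR per voxel: fractional signal loss caused by the MT pulse.
mtr = (s0 - s_mt) / s0

# Threshold with the mean MTR of the contralesional (left) half,
# mirroring the paper's use of the uninjured side as reference.
threshold = mtr[:, :32].mean()
wm_area_voxels = int((mtr >= threshold).sum())
print(wm_area_voxels)             # only the contralesional half passes
```

Multiplying `wm_area_voxels` by the in-plane voxel size would give the white-matter area in physical units.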
Title: Longitudinal Imaging of Injured Spinal Cord Myelin and White Matter with 3D Ultrashort Echo Time Magnetization Transfer (UTE-MT) and Diffusion MRI (Journal of Imaging, IF 2.7). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11433189/pdf/
Enterobius vermicularis (pinworm) infections are a significant global health issue, affecting children predominantly in environments like schools and daycares. Traditional diagnosis using the scotch tape technique involves examining E. vermicularis eggs under a microscope. This method is time-consuming and depends heavily on the examiner's expertise. To improve this, convolutional neural networks (CNNs) have been used to automate the detection of pinworm eggs from microscopic images. In our study, we enhanced E. vermicularis egg detection using a CNN benchmarked against leading models. We digitized and augmented 40,000 images of E. vermicularis eggs (class 1) and artifacts (class 0) for comprehensive training, using an 80:20 training-validation and a five-fold cross-validation. The proposed CNN model showed limited initial performance but achieved 90.0% accuracy, precision, recall, and F1-score after data augmentation. It also demonstrated improved stability with an ROC-AUC metric increase from 0.77 to 0.97. Despite its smaller file size, our CNN model performed comparably to larger models. Notably, the Xception model achieved 99.0% accuracy, precision, recall, and F1-score. These findings highlight the effectiveness of data augmentation and advanced CNN architectures in improving diagnostic accuracy and efficiency for E. vermicularis infections.
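The augmentation step described above is not detailed beyond "digitized and augmented", so the sketch below shows one common approach as an assumption: flips and 90-degree rotations that multiply each source image into eight variants.

```python
import numpy as np

def augment(img: np.ndarray) -> list:
    """Return the image plus 7 flipped/rotated variants (the dihedral group D4)."""
    variants = []
    for k in range(4):                   # 0, 90, 180, 270 degree rotations
        rot = np.rot90(img, k)
        variants.append(rot)
        variants.append(np.fliplr(rot))  # mirrored copy of each rotation
    return variants

img = np.arange(16, dtype=np.uint8).reshape(4, 4)  # toy stand-in for an egg image
aug = augment(img)
print(len(aug))   # 8 variants per source image
```

Applied to each digitized egg/artifact image, transforms like these expand a modest collection toward the 40,000-image training set the study reports.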
Pub Date: 2024-08-28 | DOI: 10.3390/jimaging10090212
Authors: Natthanai Chaibutr, Pongphan Pongpanitanont, Sakhone Laymanivong, Tongjit Thanchomnang, Penchom Janwan
Title: Development of a Machine Learning Model for the Classification of Enterobius vermicularis Egg (Journal of Imaging, IF 2.7). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11433018/pdf/
Although several studies have been conducted on artificial intelligence (AI) use in mammography (MG), there is still a paucity of research on the diagnosis of metachronous bilateral breast cancer (BC), which is typically more challenging to diagnose. This study aimed to determine whether AI could enhance BC detection, achieving earlier or more accurate diagnoses than radiologists in cases of metachronous contralateral BC. We included patients who underwent unilateral BC surgery and subsequently developed contralateral BC. This retrospective study evaluated the AI-supported MG diagnostic system FxMammo™ (FathomX Pte Ltd., Singapore), assessing its capability to diagnose BC more accurately or earlier than radiologists' assessments. This evaluation was supplemented by reviewing MG readings made by radiologists. Out of 1101 patients who underwent surgery, 10 who had initially undergone a partial mastectomy and later developed contralateral BC were analyzed. The AI system identified malignancies in six cases (60%), while radiologists identified five cases (50%). Notably, two cases (20%) were diagnosed solely by the AI system. Additionally, for these cases, the AI system had identified malignancies a year before the conventional diagnosis. This study highlights the AI system's effectiveness in diagnosing metachronous contralateral BC via MG. In some cases, the AI system diagnosed cancer earlier than radiological assessments.
Pub Date: 2024-08-28 | DOI: 10.3390/jimaging10090211
Authors: Mio Adachi, Tomoyuki Fujioka, Toshiyuki Ishiba, Miyako Nara, Sakiko Maruya, Kumiko Hayashi, Yuichi Kumaki, Emi Yamaga, Leona Katsuta, Du Hao, Mikael Hartman, Feng Mengling, Goshi Oda, Kazunori Kubota, Ukihide Tateishi
Title: AI Use in Mammography for Diagnosing Metachronous Contralateral Breast Cancer (Journal of Imaging, IF 2.7). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11432939/pdf/
Pub Date : 2024-08-26DOI: 10.3390/jimaging10090210
Suchita Sharma, Ashutosh Aggarwal
The biomedical imaging field has grown enormously in the past decade. In the era of digitization, the demand for computer-assisted diagnosis is increasing day by day. The COVID-19 pandemic further emphasized how retrieving meaningful information from medical repositories can aid in improving the quality of patients' diagnoses. Content-based retrieval of medical images therefore has a very prominent role in fulfilling the ultimate goal of developing automated computer-assisted diagnosis systems. This paper presents a content-based medical image retrieval system that extracts multi-resolution, noise-resistant, rotation-invariant texture features in the form of a novel pattern descriptor, i.e., MsNrRiTxP, from medical images. In the proposed approach, the input medical image is first transformed into the neutrosophic domain and thereby decomposed into three neutrosophic images. Afterwards, three distinct pattern descriptors, i.e., MsTrP, NrTxP, and RiTxP, are derived at multiple scales from the three neutrosophic images. The proposed MsNrRiTxP pattern descriptor is obtained by scale-wise concatenation of the joint histograms of MsTrP×RiTxP and NrTxP×RiTxP. To demonstrate the efficacy of the proposed system, medical images of different modalities, i.e., CT and MRI, from four test datasets are considered in the experimental setup. The retrieval performance of the proposed approach is exhaustively compared with several existing, recent, and state-of-the-art local binary pattern-based variants. The retrieval rates obtained by the proposed approach for the noise-free and noisy variants of the test datasets are substantially higher than those of the compared methods.
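The two core steps of the pipeline above, neutrosophic decomposition into three component images and the scale-wise concatenation of joint histograms of pattern codes, can be sketched as follows. This is a minimal illustration only: the T/I/F formulas shown are one common neutrosophic formulation (local-mean truth, local-deviation indeterminacy), and the pattern-code maps fed to the joint histogram are placeholders, not the paper's actual MsTrP/NrTxP/RiTxP definitions.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def neutrosophic_decompose(img, win=5):
    """Map a grayscale image into neutrosophic T/I/F component images.

    T: normalized local mean (truth membership),
    I: normalized |pixel - local mean| (indeterminacy),
    F: 1 - T (falsity).
    """
    img = np.asarray(img, dtype=np.float64)
    local_mean = uniform_filter(img, size=win)
    t = (local_mean - local_mean.min()) / (np.ptp(local_mean) + 1e-12)
    delta = np.abs(img - local_mean)
    i = (delta - delta.min()) / (np.ptp(delta) + 1e-12)
    f = 1.0 - t
    return t, i, f


def joint_histogram(codes_a, codes_b, bins_a, bins_b):
    """Flattened, normalized 2-D joint histogram of two integer code maps."""
    h, _, _ = np.histogram2d(codes_a.ravel(), codes_b.ravel(),
                             bins=[bins_a, bins_b],
                             range=[[0, bins_a], [0, bins_b]])
    return h.ravel() / h.sum()


def descriptor(code_pairs, bins):
    """Scale-wise concatenation of joint histograms, mirroring the
    MsTrP×RiTxP + NrTxP×RiTxP concatenation scheme described above."""
    return np.concatenate([joint_histogram(a, b, bins, bins)
                           for a, b in code_pairs])
```

The final feature vector is simply the concatenation over scales, so its length grows linearly with the number of scales and code pairs used.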
{"title":"A New Approach for Effective Retrieval of Medical Images: A Step towards Computer-Assisted Diagnosis.","authors":"Suchita Sharma, Ashutosh Aggarwal","doi":"10.3390/jimaging10090210","DOIUrl":"https://doi.org/10.3390/jimaging10090210","url":null,"abstract":"<p><p>The biomedical imaging field has grown enormously in the past decade. In the era of digitization, the demand for computer-assisted diagnosis is increasing day by day. The COVID-19 pandemic further emphasized how retrieving meaningful information from medical repositories can aid in improving the quality of patient's diagnosis. Therefore, content-based retrieval of medical images has a very prominent role in fulfilling our ultimate goal of developing automated computer-assisted diagnosis systems. Therefore, this paper presents a content-based medical image retrieval system that extracts multi-resolution, noise-resistant, rotation-invariant texture features in the form of a novel pattern descriptor, i.e., MsNrRiTxP, from medical images. In the proposed approach, the input medical image is initially decomposed into three neutrosophic images on its transformation into the neutrosophic domain. Afterwards, three distinct pattern descriptors, i.e., MsTrP, NrTxP, and RiTxP, are derived at multiple scales from the three neutrosophic images. The proposed MsNrRiTxP pattern descriptor is obtained by scale-wise concatenation of the joint histograms of MsTrP×RiTxP and NrTxP×RiTxP. To demonstrate the efficacy of the proposed system, medical images of different modalities, i.e., CT and MRI, from four test datasets are considered in our experimental setup. The retrieval performance of the proposed approach is exhaustively compared with several existing, recent, and state-of-the-art local binary pattern-based variants. 
The retrieval rates obtained by the proposed approach for the noise-free and noisy variants of the test datasets are observed to be substantially higher than the compared ones.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11433568/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142355791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-08-25DOI: 10.3390/jimaging10090209
Maibritt Meldgaard Arildsen, Christian Østergaard Mariager, Christoffer Vase Overgaard, Thomas Vorre, Martin Bøjesen, Niels Moeslund, Aage Kristian Olsen Alstrup, Lars Poulsen Tolbod, Mikkel Holm Vendelbo, Steffen Ringgaard, Michael Pedersen, Niels Henrik Buus
The aim was to establish combined H215O PET/MRI during ex vivo normothermic machine perfusion (NMP) of isolated porcine kidneys. We examined whether changes in renal arterial blood flow (RABF) are accompanied by changes of a similar magnitude in renal blood perfusion (RBP) as well as the relation between RBP and renal parenchymal oxygenation (RPO).
Methods: Pig kidneys (n = 7) were connected to an NMP circuit. PET/MRI was performed at two different pump flow levels: a blood-oxygenation-level-dependent (BOLD) MRI sequence was performed simultaneously with an H215O PET sequence for determination of RBP.
Results: RBP was measured using H215O PET in all kidneys (flow 1: 0.42-0.76 mL/min/g, flow 2: 0.7-1.6 mL/min/g). We found a linear correlation between changes in delivered blood flow from the perfusion pump and changes in the measured RBP using PET imaging (r2 = 0.87).
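The reported r² = 0.87 is the coefficient of determination of a least-squares line fitted to paired changes in pump-delivered flow and PET-measured RBP. A minimal sketch of that computation, using hypothetical paired values (the abstract does not report the per-kidney data):

```python
import numpy as np


def r_squared(x, y):
    """Coefficient of determination for a least-squares fit y ~ a*x + b."""
    a, b = np.polyfit(x, y, 1)          # slope, intercept
    residuals = y - (a * x + b)
    ss_res = np.sum(residuals ** 2)      # unexplained variance
    ss_tot = np.sum((y - y.mean()) ** 2)  # total variance
    return 1.0 - ss_res / ss_tot


# Hypothetical paired changes (mL/min/g) between the two pump-flow settings;
# illustrative values only, not the study's measurements.
delta_pump = np.array([0.30, 0.45, 0.50, 0.62, 0.70, 0.80, 0.85])
delta_rbp = np.array([0.28, 0.40, 0.55, 0.60, 0.66, 0.84, 0.80])
print(r_squared(delta_pump, delta_rbp))
```

An r² near 1 indicates that PET-measured perfusion tracks the externally set pump flow almost proportionally, which is the feasibility claim being made.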
Conclusion: Our study demonstrated the feasibility of combined H215O PET/MRI during NMP of isolated porcine kidneys with tissue oxygenation being stable over time. The introduction of H215O PET/MRI in nephrological research could be highly relevant for future pre-transplant kidney evaluation and as a tool for studying renal physiology in healthy and diseased kidneys.
{"title":"Ex Vivo Simultaneous H<sub>2</sub><sup>15</sup>O Positron Emission Tomography and Magnetic Resonance Imaging of Porcine Kidneys-A Feasibility Study.","authors":"Maibritt Meldgaard Arildsen, Christian Østergaard Mariager, Christoffer Vase Overgaard, Thomas Vorre, Martin Bøjesen, Niels Moeslund, Aage Kristian Olsen Alstrup, Lars Poulsen Tolbod, Mikkel Holm Vendelbo, Steffen Ringgaard, Michael Pedersen, Niels Henrik Buus","doi":"10.3390/jimaging10090209","DOIUrl":"https://doi.org/10.3390/jimaging10090209","url":null,"abstract":"<p><p>The aim was to establish combined H<sub>2</sub><sup>15</sup>O PET/MRI during ex vivo normothermic machine perfusion (NMP) of isolated porcine kidneys. We examined whether changes in renal arterial blood flow (RABF) are accompanied by changes of a similar magnitude in renal blood perfusion (RBP) as well as the relation between RBP and renal parenchymal oxygenation (RPO).</p><p><strong>Methods: </strong>Pig kidneys (n = 7) were connected to a NMP circuit. PET/MRI was performed at two different pump flow levels: a blood-oxygenation-level-dependent (BOLD) MRI sequence performed simultaneously with a H<sub>2</sub><sup>15</sup>O PET sequence for determination of RBP.</p><p><strong>Results: </strong>RBP was measured using H<sub>2</sub><sup>15</sup>O PET in all kidneys (flow 1: 0.42-0.76 mL/min/g, flow 2: 0.7-1.6 mL/min/g). We found a linear correlation between changes in delivered blood flow from the perfusion pump and changes in the measured RBP using PET imaging (r<sup>2</sup> = 0.87).</p><p><strong>Conclusion: </strong>Our study demonstrated the feasibility of combined H<sub>2</sub><sup>15</sup>O PET/MRI during NMP of isolated porcine kidneys with tissue oxygenation being stable over time. 
The introduction of H215O PET/MRI in nephrological research could be highly relevant for future pre-transplant kidney evaluation and as a tool for studying renal physiology in healthy and diseased kidneys.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2024-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11433579/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142355817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}