Pub Date: 2025-05-05 | eCollection Date: 2025-01-01 | DOI: 10.1093/biomethods/bpaf034
Tuan D Pham
This study introduces an approach to classifying histopathological images for detecting dysplasia in oral cancer through the fusion of support vector machine (SVM) classifiers trained on deep learning features extracted from InceptionResNet-v2 and vision transformer (ViT) models. The classification of dysplasia, a critical indicator of oral cancer progression, is often complicated by class imbalance, with a higher prevalence of dysplastic lesions compared to non-dysplastic cases. This research addresses this challenge by leveraging the complementary strengths of the two models. The InceptionResNet-v2 model, paired with an SVM classifier, excels in identifying the presence of dysplasia, capturing fine-grained morphological features indicative of the condition. In contrast, the ViT-based SVM demonstrates superior performance in detecting the absence of dysplasia, effectively capturing global contextual information from the images. A fusion strategy was employed to combine these classifiers through class selection: the majority class (presence of dysplasia) was predicted using the InceptionResNet-v2-SVM, while the minority class (absence of dysplasia) was predicted using the ViT-SVM. The fusion approach significantly outperformed individual models and other state-of-the-art methods, achieving superior balanced accuracy, sensitivity, precision, and area under the curve. This demonstrates its ability to handle class imbalance effectively while maintaining high diagnostic accuracy. The results highlight the potential of integrating deep learning feature extraction with SVM classifiers to improve classification performance in complex medical imaging tasks. This study underscores the value of combining complementary classification strategies to address the challenges of class imbalance and improve diagnostic workflows.
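The class-selection fusion described above can be sketched as follows. This is a minimal illustration using synthetic features and scikit-learn's `SVC`, not the paper's implementation; the feature values, kernel settings, and the exact form of the fusion rule are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for deep features; in the study these would come from
# InceptionResNet-v2 and ViT backbones applied to histopathology images.
X_incep = rng.normal(size=(200, 16))
X_vit = rng.normal(size=(200, 16))
y = (rng.random(200) < 0.7).astype(int)  # 1 = dysplasia present (majority class)
X_incep[y == 1] += 1.0  # make the toy features weakly informative
X_vit[y == 0] -= 1.0

svm_incep = SVC().fit(X_incep, y)  # assumed stronger on the majority class
svm_vit = SVC().fit(X_vit, y)      # assumed stronger on the minority class

def fused_predict(f_incep, f_vit):
    """Class-selection fusion: keep the InceptionResNet-v2-SVM's call when it
    predicts the majority class; otherwise defer to the ViT-SVM."""
    p_incep = svm_incep.predict(f_incep)
    p_vit = svm_vit.predict(f_vit)
    return np.where(p_incep == 1, 1, p_vit)

pred = fused_predict(X_incep, X_vit)
print(pred.shape)  # (200,)
```

The key design point is that each classifier is only trusted for the class it handles best, which is one way to cope with class imbalance without resampling.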
Title: Integrating support vector machines and deep learning features for oral cancer histopathology analysis. (Biology Methods and Protocols 10(1): bpaf034; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12122209/pdf/)
Pub Date: 2025-04-28 | eCollection Date: 2025-01-01 | DOI: 10.1093/biomethods/bpaf033
Manuel González Lastre, Pablo González De Prado Salas, Raúl Guantes
Cancer treatments often lose effectiveness as tumors develop resistance to single-agent therapies. Combination treatments can overcome this limitation, but the overwhelming combinatorial space of drug-dose interactions makes exhaustive experimental testing impractical. Data-driven methods, such as machine and deep learning, have emerged as promising tools to predict synergistic drug combinations. In this work, we systematically investigate the use of categorical embeddings within Deep Neural Networks to enhance drug synergy predictions. These learned and transferable encodings capture similarities between the elements of each category, demonstrating particular utility in scarce data scenarios.
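The core idea of categorical embeddings can be sketched with plain NumPy: each category ID (drug, cell line) indexes a row of a dense table, and the rows are concatenated into the network's input. In the paper these tables are trainable parameters inside the deep network; the sizes, the random initialization, and the triple layout below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy vocabulary sizes and embedding width (the paper's values are not stated here).
n_drugs, n_cell_lines, dim = 50, 10, 8

# In a deep network these tables are learned end-to-end (an embedding layer);
# random values stand in for trained weights in this sketch.
E_drug = rng.normal(scale=0.1, size=(n_drugs, dim))
E_cell = rng.normal(scale=0.1, size=(n_cell_lines, dim))

def embed_combination(drug_a, drug_b, cell_line):
    """Map a (drug, drug, cell line) triple of category IDs to the dense
    feature vector that would feed the synergy-prediction network."""
    return np.concatenate([E_drug[drug_a], E_drug[drug_b], E_cell[cell_line]])

x = embed_combination(3, 17, 5)
print(x.shape)  # (24,)
```

Because similar drugs end up with similar learned vectors, the representation transfers information between categories, which is what makes it useful when data are scarce.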
Title: Optimizing drug synergy prediction through categorical embeddings in deep neural networks. (Biology Methods and Protocols 10(1): bpaf033; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12119136/pdf/)
Pub Date: 2025-04-26 | eCollection Date: 2025-01-01 | DOI: 10.1093/biomethods/bpaf032
Pegah Khosravi, Shady Saikali, Abolfazl Alipour, Saber Mohammadi, Maxwell Boger, Dalanda M Diallo, Christopher J Smith, Marcio C Moschovas, Iman Hajirasouliha, Andrew J Hung, Srirama S Venkataraman, Vipul Patel
Preoperative identification of extracapsular extension (ECE) in prostate cancer (PCa) is crucial for effective treatment planning, as ECE presence significantly increases the risk of positive surgical margins and early biochemical recurrence following radical prostatectomy. AutoRadAI, an innovative artificial intelligence (AI) framework, was developed to address this clinical challenge while demonstrating broader potential for diverse medical imaging applications. The framework integrates T2-weighted MRI data with histopathology annotations, leveraging a dual convolutional neural network (multi-CNN) architecture. AutoRadAI comprises two key components: ProSliceFinder, which isolates prostate-relevant MRI slices, and ExCapNet, which evaluates ECE likelihood at the patient level. The system was trained and validated on a dataset of 1001 patients (510 ECE-positive, 491 ECE-negative cases). ProSliceFinder achieved an area under the ROC curve (AUC) of 0.92 (95% confidence interval [CI]: 0.89-0.94) for slice classification, while ExCapNet demonstrated robust performance with an AUC of 0.88 (95% CI: 0.83-0.92) for patient-level ECE detection. Additionally, AutoRadAI's modular design ensures scalability and adaptability for applications beyond ECE detection. Validated through a user-friendly web-based interface for seamless clinical integration, AutoRadAI highlights the potential of AI-driven solutions in precision oncology. This framework improves diagnostic accuracy and streamlines preoperative staging, offering transformative applications in PCa management and beyond.
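The two-stage gating (slice selection, then patient-level scoring) can be sketched as below. The thresholds, mean-pooling, and score layout are hypothetical simplifications, not the published ProSliceFinder/ExCapNet design.

```python
import numpy as np

def patient_level_ece(slice_scores, slice_keep_thresh=0.5, ece_thresh=0.5):
    """Two-stage aggregation in the spirit of ProSliceFinder -> ExCapNet:
    keep only slices judged prostate-relevant, then score the patient from
    the retained slices. Thresholds and mean-pooling are illustrative
    assumptions, not the paper's method."""
    relevance, ece = slice_scores[:, 0], slice_scores[:, 1]
    kept = ece[relevance >= slice_keep_thresh]
    if kept.size == 0:
        return 0.0, False  # no usable slices -> call the patient negative
    patient_score = float(kept.mean())
    return patient_score, patient_score >= ece_thresh

# Each row: (prostate-relevance score, ECE probability) for one MRI slice.
scores = np.array([[0.9, 0.8], [0.2, 0.9], [0.7, 0.6]])
print(patient_level_ece(scores))  # score ~0.7, positive call
```

Note how the middle slice's high ECE probability is discarded because the first stage judged it irrelevant; filtering before pooling is what keeps slice noise out of the patient-level decision.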
Title: AutoRadAI: a versatile artificial intelligence framework validated for detecting extracapsular extension in prostate cancer. (Biology Methods and Protocols 10(1): bpaf032; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12119131/pdf/)
Pub Date: 2025-04-24 | eCollection Date: 2025-01-01 | DOI: 10.1093/biomethods/bpaf031
Zhen Zhou, Ripon Sarkar, Jose Emiliano Esparza Pinelo, Alexis Richard, Jay Dunn, Zhao Ren, Callie S Kwartler, Dianna M Milewicz
Thoracic aortic aneurysm and dissection (TAD) is a life-threatening vascular disorder in which smooth muscle cell mitochondrial dysfunction leads to cell death, contributing to disease progression. Accurate measurements of metabolic processes are essential for understanding cellular homeostasis in both healthy and diseased states. While assays for evaluating mitochondrial respiration are well established for cultured cells and isolated mitochondria, no optimized application has been developed for aortic tissue. In this study, we present an optimized protocol using the Agilent Seahorse XFe24 analyzer to measure mitochondrial respiration in mouse aortic tissue. This method allows precise measurement of mitochondrial oxygen consumption in the mouse aorta, providing a reliable assay for bioenergetic analysis of aortic tissue. The protocol offers a reproducible approach for assessing mitochondrial function in aortic tissues, capturing both baseline oxygen consumption rate (OCR) and responses to mitochondrial inhibitors such as oligomycin, FCCP, and rotenone/antimycin A. This method establishes a critical foundation for studying metabolic shifts in aortic tissues and offers valuable insights into the cellular mechanisms of aortic diseases, contributing to a better understanding of TAD progression.
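The derived respiration parameters behind this kind of inhibitor series follow conventional mito-stress-test arithmetic; the sketch below shows that bookkeeping with made-up numbers, and the function name and units are illustrative rather than taken from this protocol.

```python
def respiration_parameters(basal_ocr, oligomycin_ocr, fccp_ocr, rot_aa_ocr):
    """Conventional derived quantities from an OCR trace (e.g. pmol O2/min).
    rot_aa_ocr is the rate after rotenone + antimycin A, i.e. the
    non-mitochondrial component."""
    non_mito = rot_aa_ocr
    return {
        "basal_respiration": basal_ocr - non_mito,       # mitochondrial baseline
        "atp_linked": basal_ocr - oligomycin_ocr,        # lost when ATP synthase is blocked
        "proton_leak": oligomycin_ocr - non_mito,        # O2 use not coupled to ATP
        "maximal_respiration": fccp_ocr - non_mito,      # uncoupler-driven maximum
        "spare_capacity": fccp_ocr - basal_ocr,          # reserve above baseline
    }

# Hypothetical plate-well readings at each stage of the injection series.
params = respiration_parameters(basal_ocr=100, oligomycin_ocr=40,
                                fccp_ocr=150, rot_aa_ocr=20)
print(params["spare_capacity"])  # 50
```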
Title: Measurement of oxygen consumption rate in mouse aortic tissue. (Biology Methods and Protocols 10(1): bpaf031; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12054972/pdf/)
Pub Date: 2025-04-17 | eCollection Date: 2025-01-01 | DOI: 10.1093/biomethods/bpaf028
Murali Aadhitya Magateshvaren Saras, Mithun K Mitra, Sonika Tyagi
The application of machine learning (ML) techniques in predictive modelling has greatly advanced our comprehension of biological systems. There is a notable shift towards integration methods that target the simultaneous analysis of multiple modes or types of data, which show superior results compared to individual analyses. Despite the availability of diverse ML architectures for researchers interested in embracing a multimodal approach, the current literature lacks a comprehensive taxonomy that includes the pros and cons of these methods to guide the entire process. Closing this gap is imperative and necessitates the creation of a robust framework. This framework should not only categorize the diverse ML architectures suitable for multimodal analysis but also offer insights into their respective advantages and limitations. Such a framework can also serve as a valuable guide for selecting an appropriate workflow for multimodal analysis. This comprehensive taxonomy would provide clear guidance and support informed decision-making within the progressively intricate landscape of biomedical and clinical data analysis, an essential step towards advancing personalized medicine. The aims of this work are to comprehensively study and describe the harmonization processes performed and reported in the literature and to present a working guide that enables planning and selecting an appropriate integrative model. We present harmonization as a dual process of representation and integration, each with multiple methods and categories. The various representation and integration methods are classified into six broad categories, each detailed with advantages, disadvantages, and examples. A flowchart describing the step-by-step process needed to adopt a multimodal approach is also presented, along with examples and references.
This review provides a thorough taxonomy of methods for harmonizing multimodal data and introduces a foundational 10-step guide for newcomers to implement a multimodal workflow.
Title: Navigating the Multiverse: a Hitchhiker's guide to selecting harmonization methods for multimodal biomedical data. (Biology Methods and Protocols 10(1): bpaf028; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12043205/pdf/)
Pub Date: 2025-04-12 | eCollection Date: 2025-01-01 | DOI: 10.1093/biomethods/bpaf029
Katherine R Seymour, Jessica P Rickard, Kelsey R Pool, Taylor Pini, Simon P de Graaf
Standardized training for subjective assessments in biological science is crucial for improving and maintaining accuracy. However, in reproductive science there is no standardized training tool available to assess sperm morphology. Sperm morphology is routinely assessed subjectively across several species and is often used as grounds to reject or retain samples for sale or insemination. As with all subjective tests, sperm morphology assessment is liable to human bias, and without appropriate standardization these assessments are unreliable. This proof-of-concept study aimed to develop a standardized sperm morphology assessment training tool that can train and test students on a sperm-by-sperm basis. This manuscript outlines the methods used to develop a training tool with the capability to account for different microscope optics, morphological classification systems, and species of spermatozoa assessed. The generation of images, their classification, organization, and integration into a web interface, along with its design and outputs, are described. Briefly, images of spermatozoa were generated by taking field-of-view (FOV) images at 40× magnification under DIC optics, amounting to a total of 3,600 FOV images from 72 rams (50 FOV/ram). These FOV images were cropped to show only one sperm per image using a novel machine-learning algorithm. The resulting 9,365 images were labelled by three experienced assessors, and those with 100% consensus on all labels (4,821/9,365) were integrated into a web interface able to provide both (i) instant feedback to users on correct/incorrect labels for training purposes, and (ii) an assessment of user proficiency. Future studies will test the effectiveness of the training tool to educate students on the application of a variety of morphology classification systems.
If proven effective, it will be the first standardized method to train individuals in sperm morphology assessment and help to improve understanding of how training should be conducted.
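The consensus filter described above (keep only images on which all three assessors agree on every label) can be sketched as follows; the flat string labels and function name are simplifications, not the tool's data model.

```python
def unanimous(labels_by_assessor):
    """Return indices of images on which all assessors gave identical labels.
    labels_by_assessor: one list of per-image labels per assessor; a single
    string per image is a simplification of multi-label morphology calls."""
    n_images = len(labels_by_assessor[0])
    keep = []
    for i in range(n_images):
        votes = {assessor[i] for assessor in labels_by_assessor}
        if len(votes) == 1:  # 100% consensus across assessors
            keep.append(i)
    return keep

# Hypothetical calls from three assessors on four cropped sperm images.
a1 = ["normal", "bent tail", "normal", "coiled tail"]
a2 = ["normal", "bent tail", "detached head", "coiled tail"]
a3 = ["normal", "bent tail", "normal", "coiled tail"]
print(unanimous([a1, a2, a3]))  # [0, 1, 3]
```

Applying such a filter is how the tool's 9,365 labelled images were reduced to the 4,821 with full consensus that serve as ground truth.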
Title: Development of a sperm morphology assessment standardization training tool. (Biology Methods and Protocols 10(1): bpaf029; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12036963/pdf/)
Pub Date: 2025-04-11 | eCollection Date: 2025-01-01 | DOI: 10.1093/biomethods/bpaf030
Ajay K Mali, Sivasubramanian Murugappan, Jayashree Rajesh Prasad, Syed A M Tofail, Nanasaheb D Thorat
Three-dimensional (3D) spheroid models have advanced cancer research by better mimicking the tumour microenvironment compared to traditional two-dimensional cell cultures. However, challenges persist in high-throughput analysis of morphological characteristics and cell viability, as traditional methods like manual fluorescence analysis are labour-intensive and inconsistent. Existing AI-based approaches often address segmentation or classification in isolation, lacking an integrated workflow. We propose a scalable, two-stage deep learning pipeline to address these gaps: (i) a U-Net model for precise detection and segmentation of 3D spheroids from microscopic images, achieving 95% prediction accuracy, and (ii) a CNN regression hybrid method for estimating live/dead cell percentages and classifying spheroids, with an R² value of 98%. This end-to-end pipeline automates cell viability quantification and generates key morphological parameters for spheroid growth kinetics. By integrating segmentation and analysis, our method addresses environmental variability and morphological characterization challenges, offering a robust tool for drug discovery, toxicity screening, and clinical research. This approach significantly improves the efficiency and scalability of 3D spheroid evaluations, paving the way for advancements in cancer therapeutics.
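The downstream viability quantity such a pipeline produces can be illustrated with a toy mask-based computation; counting pixels as a proxy for cells is a deliberate simplification of the paper's regression stage, and all arrays here are synthetic.

```python
import numpy as np

def viability_percent(live_mask, dead_mask, spheroid_mask):
    """Percent live signal inside a segmented spheroid, from per-pixel
    live/dead calls. A pixel-counting proxy for cell percentages; the
    paper's CNN regression estimates these directly."""
    live = np.logical_and(live_mask, spheroid_mask).sum()
    dead = np.logical_and(dead_mask, spheroid_mask).sum()
    total = live + dead
    return 100.0 * live / total if total else 0.0

# Toy 4x4 image: the whole frame is spheroid, top three rows read "live".
spheroid = np.ones((4, 4), bool)
live = np.zeros((4, 4), bool)
live[:3] = True          # 12 "live" pixels
dead = ~live             # 4 "dead" pixels
print(viability_percent(live, dead, spheroid))  # 75.0
```

Restricting the count to the segmentation mask is the point of running the U-Net first: background fluorescence outside the spheroid never enters the viability estimate.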
Title: A deep learning pipeline for morphological and viability assessment of 3D cancer cell spheroids. (Biology Methods and Protocols 10(1): bpaf030; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12064216/pdf/)
Pub Date: 2025-04-09 | eCollection Date: 2025-01-01 | DOI: 10.1093/biomethods/bpaf027
Avinash Agarwal, Filipe de Jesus Colwell, Viviana Andrea Correa Galvis, Tom R Hill, Neil Boonham, Ankush Prashar
Estimating pigment content of leafy vegetables via digital image analysis is a reliable method for high-throughput assessment of their nutritional value. However, current leaf color analysis models developed using green-leaved plants fail to perform reliably when analyzing images of anthocyanin (Anth)-rich red-leaved varieties due to misleading or "red herring" trends. Hence, the present study explores the potential for machine learning (ML)-based estimation of nutritional pigment content for green and red leafy vegetables simultaneously using digital color features. For this, images of n = 320 samples from six types of leafy vegetables with varying pigment profiles were acquired using a smartphone camera, followed by extract-based estimation of chlorophyll (Chl), carotenoid (Car), and Anth. Subsequently, three ML methods, namely Partial Least Squares Regression (PLSR), Support Vector Regression (SVR), and Random Forest Regression (RFR), were tested for predicting pigment contents using RGB (Red, Green, Blue), HSV (Hue, Saturation, Value), and L*a*b* (Lightness, Redness-greenness, Yellowness-blueness) datasets individually and in combination. Chl and Car contents were predicted most accurately using the combined colorimetric dataset via SVR (R² = 0.738) and RFR (R² = 0.573), respectively. Conversely, Anth content was predicted most accurately using SVR with HSV data (R² = 0.818). While Chl and Car could be predicted reliably for green-leaved and Anth-rich samples, Anth could be estimated accurately only for Anth-rich samples due to Anth masking by Chl in green-leaved samples. Thus, the present findings demonstrate the scope of implementing ML-based leaf color analysis for assessing the nutritional pigment content of red and green leafy vegetables in tandem.
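The best-performing combination for anthocyanin (SVR on HSV features) can be sketched with scikit-learn and the standard library's `colorsys`; the color-to-pigment relationship below is entirely synthetic, standing in for the study's smartphone images and extract-based measurements.

```python
import colorsys

import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)

# Synthetic "mean leaf colors": redder leaves get higher anthocyanin values
# (a made-up relationship for illustration only).
rgb = rng.random((120, 3))
anth = 2.0 * rgb[:, 0] - rgb[:, 1] + rng.normal(scale=0.05, size=120)

# Convert each RGB triple to HSV, the color space that predicted Anth best.
hsv = np.array([colorsys.rgb_to_hsv(*row) for row in rgb])

model = SVR(kernel="rbf").fit(hsv, anth)
r2 = model.score(hsv, anth)  # training-set R^2 of the toy fit
print(round(r2, 2))
```

In practice the fit would be evaluated on held-out samples (the 0.818 reported above is a proper test metric, unlike this in-sample score).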
{"title":"Assessing nutritional pigment content of green and red leafy vegetables by image analysis: Catching the \"red herring\" of plant digital color processing via machine learning.","authors":"Avinash Agarwal, Filipe de Jesus Colwell, Viviana Andrea Correa Galvis, Tom R Hill, Neil Boonham, Ankush Prashar","doi":"10.1093/biomethods/bpaf027","DOIUrl":"https://doi.org/10.1093/biomethods/bpaf027","url":null,"abstract":"<p><p>Estimating pigment content of leafy vegetables via digital image analysis is a reliable method for high-throughput assessment of their nutritional value. However, the current leaf color analysis models developed using green-leaved plants fail to perform reliably while analyzing images of anthocyanin (Anth)-rich red-leaved varieties due to misleading or \"red herring\" trends. Hence, the present study explores the potential for machine learning (ML)-based estimation of nutritional pigment content for green and red leafy vegetables simultaneously using digital color features. For this, images of <i>n </i>=<i> </i>320 samples from six types of leafy vegetables with varying pigment profiles were acquired using a smartphone camera, followed by extract-based estimation of chlorophyll (Chl), carotenoid (Car), and Anth. Subsequently, three ML methods, namely, Partial Least Squares Regression (PLSR), Support Vector Regression (SVR), and Random Forest Regression (RFR), were tested for predicting pigment contents using RGB (Red, Green, Blue), HSV (Hue, Saturation, Value), and <i>L*a*b*</i> (Lightness, Redness-greenness, Yellowness-blueness) datasets individually and in combination. Chl and Car contents were predicted most accurately using the combined colorimetric dataset via SVR (<i>R<sup>2</sup></i> = 0.738) and RFR (<i>R<sup>2</sup></i> = 0.573), respectively. Conversely, Anth content was predicted most accurately using SVR with HSV data (<i>R<sup>2</sup></i> = 0.818). 
While Chl and Car could be predicted reliably for green-leaved and Anth-rich samples, Anth could be estimated accurately only for Anth-rich samples due to Anth masking by Chl in green-leaved samples. Thus, the present findings demonstrate the scope of implementing ML-based leaf color analysis for assessing the nutritional pigment content of red and green leafy vegetables in tandem.</p>","PeriodicalId":36528,"journal":{"name":"Biology Methods and Protocols","volume":"10 1","pages":"bpaf027"},"PeriodicalIF":2.5,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12057810/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144062736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
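The regression setup described above (color features in, pigment content out) can be sketched with scikit-learn's SVR. This is a minimal illustration on synthetic data, not the study's pipeline: the three "HSV-like" features, the synthetic pigment response, and the kernel settings are all assumptions.

```python
# Hedged sketch: SVR regression from color features to pigment content.
# Synthetic data stands in for the study's smartphone-derived color values.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 320  # sample count matching the study
# Hypothetical HSV-style color features scaled to [0, 1]
X = rng.uniform(0.0, 1.0, size=(n, 3))
# Synthetic pigment content: mostly hue-driven, plus measurement noise
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0.0, 0.1, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
# Standardize features, then fit an RBF-kernel SVR (illustrative settings)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X_tr, y_tr)
r2 = r2_score(y_te, model.predict(X_te))
print(f"held-out R^2 = {r2:.3f}")
```

The study compared such models across RGB, HSV, and L*a*b* feature sets; swapping the feature matrix `X` is the only change needed to reproduce that kind of comparison.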
Pub Date : 2025-04-03eCollection Date: 2025-01-01DOI: 10.1093/biomethods/bpaf026
Poonam Kanwar, Stan Altmeisch, Petra Bauer
The rhizosphere, the region surrounding plant roots, plays a critical role in nutrient acquisition, root development, and plant-soil interactions. Spatial variations in rhizosphere pH along the root axis are shaped by environmental cues, nutrient availability, microbial activity, and root growth patterns. Precise detection and quantification of these pH changes are essential for understanding plant plasticity and nutrient efficiency. Here, we present a refined methodology integrating the pH indicator bromocresol purple with a rapid, non-destructive electrode-based system to visualize and quantify pH variations along the root axis, enabling high-resolution and scalable monitoring of root-induced pH changes in the rhizosphere. Using this approach, we investigated the impact of iron (Fe) availability on rhizosphere pH dynamics in wild-type (WT) and bHLH39-overexpressing (39Ox) seedlings. bHLH39, a key basic helix-loop-helix transcription factor in Fe uptake, enhances Fe acquisition when overexpressed, often leading to Fe toxicity and reduced root growth under Fe-sufficient conditions. However, its role in root-mediated acidification remains unclear. Our findings reveal that 39Ox plants exhibit enhanced rhizosphere acidification, whereas WT roots display zone-specific pH responses depending on Fe availability. To refine pH measurements, we developed two complementary electrode-based methodologies: localized rhizosphere pH change for region-specific assessment and integrated rhizosphere pH change for net root system variation. These techniques improve resolution, accuracy, and efficiency in large-scale experiments, providing robust tools for investigating natural and genetic variations in rhizosphere pH regulation and their role in nutrient mobilization and ecological adaptation.
{"title":"Quantitative tools for analyzing rhizosphere pH dynamics: localized and integrated approaches.","authors":"Poonam Kanwar, Stan Altmeisch, Petra Bauer","doi":"10.1093/biomethods/bpaf026","DOIUrl":"https://doi.org/10.1093/biomethods/bpaf026","url":null,"abstract":"<p><p>The rhizosphere, the region surrounding plant roots, plays a critical role in nutrient acquisition, root development, and plant-soil interactions. Spatial variations in rhizosphere pH along the root axis are shaped by environmental cues, nutrient availability, microbial activity, and root growth patterns. Precise detection and quantification of these pH changes are essential for understanding plant plasticity and nutrient efficiency. Here, we present a refined methodology integrating pH indicator bromocresol purple with a rapid, non-destructive electrode-based system to visualize and quantify pH variations along the root axis, enabling high-resolution and scalable monitoring of root-induced pH changes in the rhizosphere. Using this approach, we investigated the impact of iron (Fe) availability on rhizosphere pH dynamics in wild-type (WT) and bHLH39-overexpressing (39Ox) seedlings. bHLH39, a key basic helix-loop-helix transcription factor in Fe uptake, enhances Fe acquisition when overexpressed, often leading to Fe toxicity and reduced root growth under Fe-sufficient conditions. However, its role in root-mediated acidification remains unclear. Our findings reveal that 39Ox plants exhibit enhanced rhizosphere acidification, whereas WT roots display zone-specific pH responses depending on Fe availability. To refine pH measurements, we developed two complementary electrode-based methodologies: localized rhizosphere pH change for region-specific assessment and integrated rhizosphere pH change for net root system variation. 
These techniques improve resolution, accuracy, and efficiency in large-scale experiments, providing robust tools for investigating natural and genetic variations in rhizosphere pH regulation and their role in nutrient mobilization and ecological adaptation.</p>","PeriodicalId":36528,"journal":{"name":"Biology Methods and Protocols","volume":"10 1","pages":"bpaf026"},"PeriodicalIF":2.5,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12036966/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144015478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
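The distinction between the two electrode-based metrics can be sketched numerically. The interpretation below is an assumption from the abstract's wording: "localized" is taken as the pH shift at one root zone relative to the bulk medium, and "integrated" as the net shift averaged along the whole root axis. Zone names and readings are hypothetical.

```python
# Hedged sketch of localized vs. integrated rhizosphere pH change.
bulk_ph = 6.0  # pH of the unplanted reference medium (illustrative value)

# Hypothetical electrode readings along the root axis (tip -> base)
zone_ph = {
    "root tip": 5.4,
    "elongation zone": 5.6,
    "root hair zone": 5.7,
    "mature zone": 5.9,
}

def localized_delta(zone: str) -> float:
    """pH change in a single root zone relative to the bulk medium."""
    return zone_ph[zone] - bulk_ph

def integrated_delta() -> float:
    """Net pH change averaged over all measured zones."""
    return sum(zone_ph.values()) / len(zone_ph) - bulk_ph

print(f"localized (root tip): {localized_delta('root tip'):+.2f}")
print(f"integrated: {integrated_delta():+.2f}")
```

Under this toy reading, a strongly acidifying tip (-0.60) can coexist with a milder net acidification (-0.35), which is exactly the zone-specific behavior the abstract attributes to WT roots.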
Pub Date : 2025-03-28eCollection Date: 2025-01-01DOI: 10.1093/biomethods/bpaf024
Aksel Laudon, Zhaoze Wang, Anqi Zou, Richa Sharma, Jiayi Ji, Winston Tan, Connor Kim, Yingzhe Qian, Qin Ye, Hui Chen, Joel M Henderson, Chao Zhang, Vijaya B Kolachalama, Weining Lu
Transmission electron microscopy (TEM) images can visualize kidney glomerular filtration barrier ultrastructure, including the glomerular basement membrane (GBM) and podocyte foot processes (PFP). Podocytopathy is associated with glomerular filtration barrier morphological changes observed experimentally and clinically by measuring GBM or PFP width. However, these measurements are currently performed manually. This limits research on podocytopathy disease mechanisms and therapeutics due to labor intensiveness and inter-operator variability. We developed a deep learning-based digital pathology computational method to measure GBM and PFP width in TEM images from the kidneys of Integrin-Linked Kinase (ILK) podocyte-specific conditional knockout (cKO) mice, an animal model of podocytopathy, compared to wild-type (WT) control mice. We obtained TEM images from WT and ILK cKO littermate mice at 4 weeks of age. Our automated method was composed of two stages: a U-Net model for GBM segmentation, followed by an image processing algorithm for GBM and PFP width measurement. We evaluated its performance with a 4-fold cross-validation study on WT and ILK cKO mouse kidney pairs. Mean [95% confidence interval (CI)] GBM segmentation accuracy, calculated as Jaccard index, was 0.73 (0.70-0.76) for WT and 0.85 (0.83-0.87) for ILK cKO TEM images. Automated and manual GBM width measurements were similar for both WT (P = .49) and ILK cKO (P = .06) specimens. While automated and manual PFP width measurements were similar for WT (P = .89), they differed for ILK cKO (P < .05) specimens. WT and ILK cKO specimens were morphologically distinguishable by manual GBM (P < .05) and PFP (P < .05) width measurements. This phenotypic difference was reflected in the automated GBM (P < .05) more than PFP (P = .06) widths. Our deep learning-based digital pathology tool automated measurements in a mouse model of podocytopathy.
This proposed method provides high-throughput, objective morphological analysis and could facilitate podocytopathy research.
{"title":"Digital pathology assessment of kidney glomerular filtration barrier ultrastructure in an animal model of podocytopathy.","authors":"Aksel Laudon, Zhaoze Wang, Anqi Zou, Richa Sharma, Jiayi Ji, Winston Tan, Connor Kim, Yingzhe Qian, Qin Ye, Hui Chen, Joel M Henderson, Chao Zhang, Vijaya B Kolachalama, Weining Lu","doi":"10.1093/biomethods/bpaf024","DOIUrl":"10.1093/biomethods/bpaf024","url":null,"abstract":"<p><p>Transmission electron microscopy (TEM) images can visualize kidney glomerular filtration barrier ultrastructure, including the glomerular basement membrane (GBM) and podocyte foot processes (PFP). Podocytopathy is associated with glomerular filtration barrier morphological changes observed experimentally and clinically by measuring GBM or PFP width. However, these measurements are currently performed manually. This limits research on podocytopathy disease mechanisms and therapeutics due to labor intensiveness and inter-operator variability. We developed a deep learning-based digital pathology computational method to measure GBM and PFP width in TEM images from the kidneys of Integrin-Linked Kinase (ILK) podocyte-specific conditional knockout (cKO) mouse, an animal model of podocytopathy, compared to wild-type (WT) control mouse. We obtained TEM images from WT and ILK cKO littermate mice at 4 weeks old. Our automated method was composed of two stages: a U-Net model for GBM segmentation, followed by an image processing algorithm for GBM and PFP width measurement. We evaluated its performance with a 4-fold cross-validation study on WT and ILK cKO mouse kidney pairs. Mean [95% confidence interval (CI)] GBM segmentation accuracy, calculated as Jaccard index, was 0.73 (0.70-0.76) for WT and 0.85 (0.83-0.87) for ILK cKO TEM images. Automated and manual GBM width measurements were similar for both WT (<i>P</i> = .49) and ILK cKO (<i>P</i> = .06) specimens. 
While automated and manual PFP width measurements were similar for WT (<i>P</i> = .89), they differed for ILK cKO (<i>P</i> < .05) specimens. WT and ILK cKO specimens were morphologically distinguishable by manual GBM (<i>P</i> < .05) and PFP (<i>P</i> < .05) width measurements. This phenotypic difference was reflected in the automated GBM (<i>P</i> < .05) more than PFP (<i>P</i> = .06) widths. Our deep learning-based digital pathology tool automated measurements in a mouse model of podocytopathy. This proposed method provides high-throughput, objective morphological analysis and could facilitate podocytopathy research.</p>","PeriodicalId":36528,"journal":{"name":"Biology Methods and Protocols","volume":"10 1","pages":"bpaf024"},"PeriodicalIF":2.5,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11992336/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143986524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
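The second stage of the pipeline above (width measurement from a segmentation mask) can be illustrated with a toy example. This is a stand-in for the paper's image-processing algorithm, not its actual method: the mask is synthetic, the per-column pixel count is a deliberately simple thickness estimator, and the pixel size is a hypothetical value.

```python
# Hedged sketch: estimating membrane width from a binary segmentation mask.
import numpy as np

# Synthetic "GBM" mask: a horizontal band 8 pixels thick in a 64x64 image,
# standing in for a U-Net segmentation output.
mask = np.zeros((64, 64), dtype=bool)
mask[28:36, :] = True  # rows 28..35 -> 8-pixel-thick band

# Local width at each column = count of foreground pixels in that column.
# (A real pipeline would measure perpendicular to the membrane centerline.)
widths = mask.sum(axis=0)
mean_width_px = widths.mean()

# Convert pixels to nanometers given a hypothetical TEM pixel size
NM_PER_PX = 4.5
print(f"mean width: {mean_width_px:.1f} px = {mean_width_px * NM_PER_PX:.1f} nm")
```

Averaging such local widths over many images is what makes an automated comparison of WT versus cKO morphology feasible at scale.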