Pub Date: 2025-10-11 | eCollection Date: 2025-01-01 | DOI: 10.1093/biomethods/bpaf073
Nadine E Smith, Janni Petersen, Michael Z Michael
The long noncoding RNA NEAT1 is transcribed from a single-exon gene and produces two isoforms through alternative 3'-end processing. The short, polyadenylated NEAT1_1 drives proliferation in many malignancies by increasing glycolytic flux and the Warburg effect. The longer NEAT1_2 lacks a poly(A) tail but is an essential scaffold for paraspeckles, nuclear condensates that reportedly play a tumour-protective role. Because the two isoforms share identical 5' ends, many previous studies have quantified NEAT1_1 by subtracting NEAT1_2 from total NEAT1 levels; however, this only estimates NEAT1_1 abundance. Standard oligo(dT)-primed RT-PCR is also unsuitable for quantifying NEAT1_1, as the longer NEAT1_2 sequence contains twelve internal poly(A) tracts, so unintended priming overestimates NEAT1_1 abundance. Here, we report a novel RT-PCR method that allows relative quantification of NEAT1_1 independently of NEAT1_2. An anchored oligo(dT) primer at the reverse-transcription step enriches the cDNA pool for the NEAT1_1 isoform, and a longer primer anchoring sequence at the PCR step enhances detection specificity. We validate the method by independently quantifying NEAT1_1 after a forced isoform switch induced with antisense oligomers in both cancer and non-cancer cell lines. Additionally, we have visualized this isoform switch in colorectal cancer cell lines using fluorescence in situ hybridization specific to NEAT1_2-containing paraspeckles.
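The mis-priming problem the abstract describes can be illustrated with a short sketch (hypothetical helper names and toy sequences; the actual primer designs are in the paper): an unanchored oligo(dT) primer can land on any sufficiently long internal A-run, whereas an anchored primer (e.g. a T18VN-style design) only binds a poly(A) run that sits at the transcript's 3' end, preceded by a non-A base.

```python
import re

def find_a_runs(seq, min_len=10):
    """Return (start, length) of adenosine runs long enough for an
    unanchored oligo(dT) primer to mis-prime internally."""
    return [(m.start(), len(m.group()))
            for m in re.finditer("A{%d,}" % min_len, seq)]

def anchored_primes_only_tail(seq, min_tail=10):
    """An anchored oligo(dT) primer requires the poly(A) run to sit at
    the 3' end, immediately preceded by a non-A anchoring base."""
    return re.search("[CGT]A{%d,}$" % min_tail, seq) is not None

# Toy transcripts: an internal A-run (mimicking NEAT1_2's internal
# poly(A) tracts) versus a genuine 3' poly(A) tail (like NEAT1_1).
internal = "GCGC" + "A" * 15 + "GCGCGCGC"
tailed = "GCGCGCGC" + "A" * 20
```

On these toy sequences, `find_a_runs` flags the internal run in both-style checks, while `anchored_primes_only_tail` accepts only the tailed transcript, mirroring why the anchored design enriches for NEAT1_1.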
"NEAT and tidy: a novel RT-PCR method for the independent quantification of the NEAT1_1 isoform." Biology Methods and Protocols 2025;10(1):bpaf073. DOI: 10.1093/biomethods/bpaf073. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12603357/pdf/
Pub Date: 2025-10-10 | eCollection Date: 2025-01-01 | DOI: 10.1093/biomethods/bpaf067
Xiaofeng Ma
Accurate and non-invasive measurement of cell membrane potential is essential for studying physiological processes and disease mechanisms. In this study, we propose a conceptual and computational model for estimating membrane potential based on the electrical behavior of two series-connected capacitors, simulating a cell-attached patch-clamp configuration. We hypothesize that the presence of a net intracellular charge-representing the membrane potential-affects the charging and discharging characteristics of the capacitive circuit by introducing asymmetry in voltage distribution. To test this, we used LTspice to simulate 202 different capacitor configurations, varying the internal potential from -100 mV to -10 mV in 10 mV increments. For each configuration, we applied voltage pulses and extracted four key current features: maximum and minimum amplitudes, and total charge and discharge durations. These features were used to train a machine learning model (XGBRegressor), which, despite the limited dataset size, demonstrated strong predictive performance (R2 = 0.90, RMSE = 13.79 mV) in estimating the internal potential. Our findings support the hypothesis that membrane potential can be inferred from capacitive current responses in a non-invasive, cell-attached configuration. This simulation-based framework offers a promising foundation for the non-invasive estimation of membrane potential and warrants further validation in experimental systems.
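A minimal sketch of the two-series-capacitor idea, assuming a series access resistance and placeholder component values (the abstract does not give circuit parameters): the internal potential offsets the driving voltage of a pulse, so the evoked current's peak amplitude and decay duration carry information about it, which is the same feature idea the authors feed to their regressor.

```python
import math

def pulse_current(v_pulse, v_internal, r_access=1e7, c1=5e-12, c2=5e-12,
                  dt=1e-6, t_end=2e-3):
    """Transient current after a voltage step applied across two series
    capacitors, with the internal potential biasing the driving voltage.
    All component values are illustrative placeholders."""
    c_eff = c1 * c2 / (c1 + c2)   # series combination
    tau = r_access * c_eff        # charging time constant
    return [(v_pulse - v_internal) / r_access * math.exp(-k * dt / tau)
            for k in range(int(t_end / dt))]

def extract_features(trace, dt=1e-6, frac=0.05):
    """Peak amplitude and the time for the current to decay to
    `frac` of that peak -- two of the four features in the abstract."""
    peak = max(trace, key=abs)
    thresh = abs(peak) * frac
    dur = next((k * dt for k, i in enumerate(trace) if abs(i) < thresh),
               len(trace) * dt)
    return peak, dur

# A -100 mV internal potential shifts the pulse-evoked peak current.
trace = pulse_current(v_pulse=0.01, v_internal=-0.1)
peak, dur = extract_features(trace)
```

In this toy model the peak scales linearly with (v_pulse - v_internal), so sweeping the internal potential, as the authors do from -100 mV to -10 mV, produces systematically different feature vectors for a regressor to learn from.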
"Modeling internal charge effects on capacitor dynamics for non-invasive estimation of membrane potential." Biology Methods and Protocols 2025;10(1):bpaf067. DOI: 10.1093/biomethods/bpaf067. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12517336/pdf/
Pub Date: 2025-09-30 | eCollection Date: 2025-01-01 | DOI: 10.1093/biomethods/bpaf072
Nathan Dennis, Campbell W Gourlay, Marina Ezcurra
Measurement of the oxygen consumption rate, or respirometry, is a powerful and comprehensive method for assessing mitochondrial function both in vitro and in vivo. Respirometry at the whole-organism level has been repeatedly performed in the model organism Caenorhabditis elegans, typically using high-throughput microplate-based systems over traditional Clark-type respirometers. However, these systems are highly specialized, costly to purchase and operate, and inaccessible to many researchers. Here, we develop a respirometry assay using low-cost commercially available optical oxygen sensors (PreSens OxoPlates®) and fluorescence plate readers (the BMG FLUOstar), as an alternative to more costly standard respirometry systems. This assay uses standard BMG FLUOstar protocols and a set of custom scripts to perform repeated measurements of the C. elegans oxygen consumption rate, with the optional use of respiratory inhibitors or other interventions. We validate this assay by demonstrating the linearity of basal oxygen consumption rates in samples with variable numbers of animals, and by examining the impact of respiratory inhibitors with previously demonstrated efficacy in C. elegans: carbonyl cyanide 4-(trifluoromethoxy)phenylhydrazone (FCCP; a mitochondrial uncoupler) and sodium azide (a Complex IV inhibitor). Using this assay, we demonstrate that the sequential use of FCCP and sodium azide leads to an increase in the sodium azide-treated (non-mitochondrial) oxygen consumption rate, indicating that the sequential use of respiratory inhibitors, as standard in intact cell respirometry, may produce erroneous estimates of non-mitochondrial respiration in C. elegans and thus should be avoided.
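The basal-rate measurement reduces to fitting a line to repeated oxygen readings over time; a sketch of that calculation and of the non-mitochondrial subtraction the authors caution about (hand-rolled least squares, hypothetical function names, units follow the inputs):

```python
def ocr_slope(times_min, o2_readings, n_worms):
    """Per-animal oxygen consumption rate: the negated slope of a
    least-squares line through repeated O2 readings over time.
    Units follow the inputs (e.g. pmol O2 per minute per worm)."""
    n = len(times_min)
    mx = sum(times_min) / n
    my = sum(o2_readings) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times_min, o2_readings))
             / sum((x - mx) ** 2 for x in times_min))
    return -slope / n_worms

def mitochondrial_ocr(basal_rate, azide_rate):
    """Mitochondrial fraction of respiration: basal minus the
    azide-insensitive (non-mitochondrial) rate -- the subtraction that
    the authors show is distorted if FCCP is applied before azide."""
    return basal_rate - azide_rate
```

Because basal OCR should scale linearly with the number of animals in a well, fitting per-worm slopes across wells with different worm counts gives the linearity check the authors use for validation.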
"A novel C. elegans respirometry assay using low-cost optical oxygen sensors." Biology Methods and Protocols 2025;10(1):bpaf072. DOI: 10.1093/biomethods/bpaf072. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12557035/pdf/
Pub Date: 2025-09-27 | eCollection Date: 2025-01-01 | DOI: 10.1093/biomethods/bpaf070
Christian Medina-Gómez, Pilar Elena Núñez-Ortega, Itandehui Castro-Quezada, César Antonio Irecta-Nájera, Ivan Delgado-Enciso, Rosario García-Miranda, Héctor Ochoa-Díaz-López
DNA methylation is an important genomic modification, studied in epigenetics for its role in gene activation and repression. It can be assayed by next-generation sequencing or by PCR coupled with high-resolution melting (HRM) analysis. For HRM, primers must be designed carefully, because the approach relies on specific DNA standards that display characteristic melting temperatures in the analyses. We propose and demonstrate a method for HRM methylation analysis based on the nucleotide proportions of the targeted sequences, developed in the Health Laboratory at El Colegio de la Frontera Sur (ECOSUR), Chiapas. We found that when the DNA nucleotides in the predicted amplicon occur in certain proportions (A-T versus G-C), the melting curves in the HRM analyses behave differently. Other modifications can also be made to primers, such as changing the number of CpG motifs included within the sequence. Nucleotide proportion is shown to be a simple but reliable basis for primer design when other methods are unavailable, whether through lack of resources or of sequencing equipment. Additionally, this methodological approach could help reduce time and reagent waste during standardization by improving primer-selection efficiency in multi-gene studies.
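The quantities the authors' design rests on are simple to compute; a sketch (hypothetical function names, and the Wallace rule here is a rough stand-in for illustration only — HRM work normally uses nearest-neighbour Tm models, and the authors' exact design criteria are in the paper):

```python
def base_proportions(amplicon):
    """A+T and G+C fractions of a predicted amplicon -- the proportion
    the authors relate to HRM melting-curve behaviour."""
    seq = amplicon.upper()
    gc = sum(seq.count(b) for b in "GC") / len(seq)
    return {"AT": 1 - gc, "GC": gc}

def wallace_tm(primer):
    """Wallace-rule melting temperature in deg C (2 per A/T, 4 per G/C):
    a quick screen, not a substitute for nearest-neighbour models."""
    seq = primer.upper()
    return (2 * sum(seq.count(b) for b in "AT")
            + 4 * sum(seq.count(b) for b in "GC"))

def cpg_count(primer):
    """Non-overlapping CpG motifs in a primer -- another tunable the
    authors mention when adjusting designs."""
    return primer.upper().count("CG")
```

Because G-C pairs melt at higher temperatures than A-T pairs, shifting an amplicon's base proportions shifts where its melting transition falls, which is the observable the HRM curves report.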
"Amplicon sequence proportion: A novel method for HRM primer design in DNA methylation analysis among marginalized rural population in Southern Mexico." Biology Methods and Protocols 2025;10(1):bpaf070. DOI: 10.1093/biomethods/bpaf070. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12587413/pdf/
Bovine serum albumin (BSA) nanoparticles have attracted considerable interest as biocompatible and biodegradable carriers for a range of pharmacological and biological uses. BSA nanoparticles have several advantages over other types of nanoparticles, including their ability to increase the stability and solubility of encapsulated drugs, their non-toxicity, and their ease of surface modification. Cancer treatment, immunological modulation, enzyme immobilization, controlled-release systems, bioimaging, and theranostics are among their potential applications. This protocol offers a detailed and accessible methodology for the synthesis, drug encapsulation, and characterization of albumin nanoparticles, with particular emphasis on reproducibility and adaptability. The synthesis uses the desolvation process and crosslinking with glutaraldehyde for stability. The crosslinking ratio, pH, and BSA content are important factors that can be adjusted to control size, surface charge, and dispersity. The methods used for characterization are described in detail, including dynamic light scattering for particle size and zeta potential, transmission and scanning electron microscopy for morphology, Fourier-transform infrared spectroscopy, and nanoparticle tracking analysis for stability assessment. The stability of the nanoparticles was evaluated under physiologically relevant ionic and pH conditions by dispersing them in phosphate-buffered saline, providing insight into their colloidal behavior in a simulated physiological environment. This technique facilitates the design of functionalized BSA nanoparticles for specific biomedical and therapeutic applications by acting as a fundamental reference for researchers. This work promotes innovation in nanoparticle-based technology and advances the field by standardizing preparation and characterization techniques.
"Engineered BSA nanoparticles: Synthesis, drug loading, and advanced characterization." Hemlata, A Hariharan, Nandan Murali, Srabaita Roy, Soutik Betal, Saran Kumar, Shilpi Minocha. Biology Methods and Protocols 2025;10(1):bpaf066. Pub Date: 2025-09-20. DOI: 10.1093/biomethods/bpaf066. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12466926/pdf/
Pub Date: 2025-09-15 | eCollection Date: 2025-01-01 | DOI: 10.1093/biomethods/bpaf069
Priyanshi Shah, Arun Sethuraman
Cancer remains one of the most complex diseases faced by humanity, with over 200 distinct types, each characterized by unique molecular profiles that demand specialized therapeutic approaches. [Tomczak et al. (Review The Cancer Genome Atlas (TCGA): an immeasurable source of knowledge. Współczesna Onkol 2015;1A:68-77.)] Prior studies have shown that both short and long telomere lengths (TLs) are associated with elevated cancer risk, underscoring the intricate relationship between TL variation and tumorigenesis. [Haycock et al. (Association between telomere length and risk of cancer and non-neoplastic diseases: a Mendelian randomization study. JAMA Oncol 2017;3:636-51.)] To investigate this relationship, we developed a supervised machine learning model trained on telomeric read content, genomic variants, and phenotypic features to predict tumor status. Using data from 33 cancer types within The Cancer Genome Atlas (TCGA) program, our model achieved an accuracy of 82.62% in predicting tumor status. The trained model is available for public use and further development through the project's GitHub repository: https://github.com/paribytes/TeloQuest. This work represents a novel, multidisciplinary approach to improving cancer diagnostics and risk assessment by integrating telomere biology with Biobank-scale genomic and phenotypic data. Furthermore, we highlight the potential of TL variation as a meaningful predictive biomarker in oncology.
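As a sketch of what "telomeric read content" might look like as a model feature (hypothetical function; the authors' exact feature definitions live in the TeloQuest repository linked above), one can count reads carrying runs of the canonical human telomere repeat:

```python
def telomeric_read_fraction(reads, motif="TTAGGG", min_repeats=4):
    """Fraction of sequencing reads containing at least `min_repeats`
    consecutive copies of the canonical human telomere motif.
    Aggregated per sample, this approximates telomeric read content."""
    if not reads:
        return 0.0
    target = motif * min_repeats
    hits = sum(1 for r in reads if target in r.upper())
    return hits / len(reads)
```

A per-sample fraction like this, combined with variant and phenotype columns, gives the kind of tabular feature matrix a supervised classifier can be trained on.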
"A novel machine learning approach for tumor detection based on telomeric signatures." Biology Methods and Protocols 2025;10(1):bpaf069. DOI: 10.1093/biomethods/bpaf069. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12627404/pdf/
Reliable secretome analysis is crucial for understanding cellular communication and developing therapeutic strategies. However, conventional protein quantification methods, such as the bicinchoninic acid (BCA) assay, can overestimate protein concentrations in concentrated culture media, leading to inconsistent protein loading and compromised quantitative accuracy in mass spectrometry-based proteomics. To address this methodological challenge, we developed an improved sample preparation method for secretome analysis. Our approach introduces a concentration rate-based normalization method that adjusts sample volumes according to the ultrafiltration concentration ratio, ensuring more consistent protein loading across samples. This method enabled reliable identification of 3468 secreted proteins with high reproducibility (r > 0.93) in a model system of nuclear DNA (nucDNA)-induced inflammation in HeLa cells. Secretome profiles were distinctly altered by nucDNA transfection, with 89 proteins showing significant differential release between control and nucDNA-transfected wild-type HeLa cells. Furthermore, we identified a subset of proteins, including chaperone and proteasome complexes, that were consistently released across all conditions, suggesting their potential utility as internal controls for secretome analysis. This study presents a practical solution to the methodological challenge in secretome analysis, enabling more reliable and reproducible secretome profiling. This improved methodology represents an important step toward establishing standardized protocols for secretome analysis, ultimately enhancing the quality and comparability of research in this rapidly growing field.
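The concentration rate-based normalization can be sketched as follows (hypothetical function name and placeholder volumes; the authors' exact workflow is in the protocol): load volumes are scaled by each sample's ultrafiltration concentration ratio so every sample represents the same volume of original conditioned medium.

```python
def normalized_load_volume(initial_vol_ul, final_vol_ul, target_equiv_ul):
    """Volume of concentrate to load so that each sample represents the
    same volume of original conditioned medium, independent of how far
    ultrafiltration concentrated it.

    initial_vol_ul  -- conditioned medium volume before ultrafiltration
    final_vol_ul    -- retentate volume after ultrafiltration
    target_equiv_ul -- original-medium equivalent each lane should carry
    """
    concentration_ratio = initial_vol_ul / final_vol_ul
    return target_equiv_ul / concentration_ratio
```

For example, a sample concentrated 20-fold (5000 µl down to 250 µl) needs only 20 µl of concentrate to represent 400 µl of original medium, sidestepping the BCA overestimation problem because no protein assay enters the loading decision.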
"A robust protocol for proteomic profiling of secreted proteins in conditioned culture medium." Takayoshi Otsuka, Atsushi Hatano, Masaki Matsumoto, Hideaki Matsui. Biology Methods and Protocols 2025;10(1):bpaf068. Pub Date: 2025-09-09. DOI: 10.1093/biomethods/bpaf068. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12461699/pdf/
Pub Date: 2025-09-02 | eCollection Date: 2025-01-01 | DOI: 10.1093/biomethods/bpaf065
Duc-Hau Le
MicroRNAs (miRNAs) play a critical role in disease mechanisms, making the identification of disease-associated miRNAs essential for precision medicine. We propose a novel computational method, multiplex-heterogeneous network for MiRNA-disease associations (MHMDA), which integrates multiple miRNA functional similarity networks and a disease similarity network into a multiplex-heterogeneous network. This approach employs a tailored random walk with restart algorithm to predict disease-miRNA associations, leveraging the complementary information from experimentally validated and predicted miRNA-target interactions, as well as disease phenotypic similarities. Evaluated on the human microRNA disease database and miR2Disease datasets using leave-one-out cross-validation and 5-fold cross-validation, MHMDA demonstrates superior performance, achieving area under the receiver operating characteristic curve values of 0.938 and 0.913 on human microRNA disease database and miR2Disease, respectively, and outperforming existing methods. The integration of multiplex networks enhances prediction accuracy by capturing diverse miRNA functional relationships, which directly contributes to the high area under the receiver operating characteristic curve and area under the precision-recall curve values observed. Additionally, MHMDA's stability across parameter variations and disease contexts underscores its robustness and potential for real-world applications in identifying novel disease-miRNA associations.
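The random walk with restart at the core of MHMDA can be sketched on a toy network (nested-list adjacency, a generic RWR; the paper's multiplex-heterogeneous construction and tuned restart probability are not reproduced here):

```python
def random_walk_restart(adj, seeds, restart=0.7, iters=100):
    """Random walk with restart: iterate p = (1-r) * W p + r * p0,
    where W is the column-normalized adjacency matrix and p0 puts
    uniform mass on the seed nodes (e.g. a disease's known miRNAs)."""
    n = len(adj)
    col_sums = [sum(adj[i][j] for i in range(n)) for j in range(n)]
    w = [[adj[i][j] / col_sums[j] if col_sums[j] else 0.0
          for j in range(n)] for i in range(n)]
    p0 = [1.0 / len(seeds) if i in seeds else 0.0 for i in range(n)]
    p = p0[:]
    for _ in range(iters):
        p = [(1 - restart) * sum(w[i][j] * p[j] for j in range(n))
             + restart * p0[i] for i in range(n)]
    return p

# Toy 3-node path graph seeded at node 0; unseeded nodes are ranked
# by how reachable they are from the seed.
scores = random_walk_restart([[0, 1, 0], [1, 0, 1], [0, 1, 0]], {0})
```

In the disease-miRNA setting, candidate miRNAs are ranked by their steady-state score given seeds drawn from known associations; the multiplex extension runs this walk jointly over several miRNA similarity layers coupled to the disease network.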
{"title":"Integrating multiple microRNA functional similarity networks for improved disease-microRNA association prediction.","authors":"Duc-Hau Le","doi":"10.1093/biomethods/bpaf065","DOIUrl":"10.1093/biomethods/bpaf065","url":null,"abstract":"<p><p>MicroRNAs (miRNAs) play a critical role in disease mechanisms, making the identification of disease-associated miRNAs essential for precision medicine. We propose a novel computational method, multiplex-heterogeneous network for MiRNA-disease associations (MHMDA), which integrates multiple miRNA functional similarity networks and a disease similarity network into a multiplex-heterogeneous network. This approach employs a tailored random walk with restart algorithm to predict disease-miRNA associations, leveraging the complementary information from experimentally validated and predicted miRNA-target interactions, as well as disease phenotypic similarities. Evaluated on the human microRNA disease database and miR2Disease datasets using leave-one-out cross-validation and 5-fold cross-validation, MHMDA demonstrates superior performance, achieving area under the receiver operating characteristic curve values of 0.938 and 0.913 on human microRNA disease database and miR2Disease, respectively, and outperforming existing methods. The integration of multiplex networks enhances prediction accuracy by capturing diverse miRNA functional relationships, which directly contributes to the high area under the receiver operating characteristic curve and area under the precision-recall curve values observed. 
Additionally, MHMDA's stability across parameter variations and disease contexts underscores its robustness and potential for real-world applications in identifying novel disease-miRNA associations.</p>","PeriodicalId":36528,"journal":{"name":"Biology Methods and Protocols","volume":"10 1","pages":"bpaf065"},"PeriodicalIF":1.3,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12410926/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145016322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
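The propagation step at the heart of MHMDA is a random walk with restart (RWR). As a minimal illustration only, a single-network RWR can be sketched as below; the paper's multiplex-heterogeneous construction additionally uses layer-specific transition matrices and inter-layer jump probabilities that this toy omits, and the network and seed choices here are invented:

```python
import numpy as np

def random_walk_with_restart(adj, seed_idx, restart=0.7, tol=1e-10, max_iter=1000):
    """Steady-state visiting probabilities of a restart walk seeded at known nodes."""
    # Column-normalize the adjacency so each column is a transition distribution.
    col_sums = adj.sum(axis=0)
    W = adj / np.where(col_sums == 0, 1, col_sums)
    p0 = np.zeros(adj.shape[0])
    p0[seed_idx] = 1.0 / len(seed_idx)  # restart mass split over seed nodes
    p = p0.copy()
    for _ in range(max_iter):
        p_next = (1 - restart) * W @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p

# Toy 5-node network: nodes 0-1 are known disease-associated miRNAs (seeds);
# the remaining nodes are candidates ranked by their steady-state score.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 1],
              [0, 1, 0, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
scores = random_walk_with_restart(A, seed_idx=[0, 1])
ranking = 2 + np.argsort(-scores[2:])  # candidate nodes, best first
```

Candidates closer to the seed set accumulate more visiting probability, which is the intuition MHMDA exploits across its multiple similarity layers.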
Pub Date : 2025-08-26eCollection Date: 2025-01-01DOI: 10.1093/biomethods/bpaf060
Masahiro Ono
Fluorescent Timer proteins undergo a time-dependent shift from blue to red fluorescence after translation, providing a temporal record of transcriptional activity in Timer reporter systems. While Timer proteins are well suited for studying dynamic cellular processes such as T cell activation using the Timer-of-Cell-Kinetics-and-Activity (Tocky) framework, quantitative analysis of Timer-based flow cytometry data has yet to be fully standardized. In this study, we optimize quantitative analysis methods for the key parameter within the Tocky framework, Timer Angle, and introduce TockyLocus, an open-source R package that implements a five-category scheme based on biologically grounded angular intervals (designated as Tocky Loci). This approach is validated using both simulated and experimental datasets and enables downstream statistical testing and visualization of transcriptional dynamics in flow cytometry data. Using computational modelling of Timer protein kinetics, we define transcriptional dynamics in relation to key anchoring points in Timer Angle values at 0°, 45°, and 90°. Comprehensive simulations with synthetic spike-in datasets further demonstrate the robustness of the five-locus approach, which captures the three key points and the intermediate regions between these points. Building on the TockyPrep preprocessing framework, we systematically evaluated categorization schemes ranging from three to seven loci on real-world datasets from Nr4a3-Tocky and Foxp3-Tocky mice. The five-locus model emerged as optimal, showing significant advantages in balancing biological interpretability and statistical robustness. Optimized algorithms implemented in the TockyLocus package now standardize quantitative analysis of Timer Angle data, enabling reproducible interpretation without reliance on arbitrary gating or complex assumptions.
In summary, the five-locus categorization of Timer Angle data effectively links underlying biological dynamics to the percentage of cells in each Tocky Locus, providing a robust and interpretable framework for investigating transcriptional dynamics in immunology and related fields.
{"title":"TockyLocus: quantitative analysis of flow cytometric fluorescent timer data in Nr4a3-Tocky and Foxp3-Tocky mice.","authors":"Masahiro Ono","doi":"10.1093/biomethods/bpaf060","DOIUrl":"10.1093/biomethods/bpaf060","url":null,"abstract":"<p><p>Fluorescent Timer proteins undergo a time-dependent shift from blue to red fluorescence after translation, providing a temporal record of transcriptional activity in Timer reporter systems. While Timer proteins are well suited for studying dynamic cellular processes such as T cell activation using the Timer-of-Cell-Kinetics-and-Activity (Tocky) framework, quantitative analysis of Timer-based flow cytometry data has yet to be fully standardized. In this study, we optimize quantitative analysis methods for the key parameter within the Tocky framework, Timer Angle, and introduce TockyLocus, an open-source <math><mi>R</mi></math> package that implements a five-category scheme based on biologically grounded angular intervals (designated as Tocky Loci). This approach is validated using both simulated and experimental datasets and enables downstream statistical testing and visualization of transcriptional dynamics in flow cytometry data. Using computational modelling of Timer protein kinetics, we define transcriptional dynamics in relation to key anchoring points in Timer Angle values at <math> <mrow> <mrow> <msup><mrow><mn>0</mn></mrow> <mo>°</mo></msup> </mrow> </mrow> </math> , <math> <mrow> <mrow> <msup> <mrow><mrow><mn>45</mn></mrow> </mrow> <mo>°</mo></msup> </mrow> </mrow> </math> , and <math> <mrow> <mrow> <msup> <mrow><mrow><mn>90</mn></mrow> </mrow> <mo>°</mo></msup> </mrow> </mrow> </math> . Comprehensive simulations with synthetic spike-in datasets further demonstrate the robustness of the five-locus approach, which captures the three key points and the intermediate regions between these points. 
Building on the TockyPrep preprocessing framework, we systematically evaluated categorization schemes ranging from three to seven loci on real-world datasets from Nr4a3-Tocky and Foxp3-Tocky mice. The five-locus model emerged as optimal, showing significant advantages in balancing biological interpretability and statistical robustness. Optimized algorithms implemented in the TockyLocus package now standardize quantitative analysis of Timer Angle data, enabling reproducible interpretation without reliance on arbitrary gating or complex assumptions. In summary, the five-locus categorization of Timer Angle data effectively links underlying biological dynamics to the percentage of cells in each Tocky Locus, providing a robust and interpretable framework for investigating transcriptional dynamics in immunology and related fields.</p>","PeriodicalId":36528,"journal":{"name":"Biology Methods and Protocols","volume":"10 1","pages":"bpaf060"},"PeriodicalIF":1.3,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12464679/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145187003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
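The five-locus idea is to keep narrow intervals around the three anchors (0° for new transcription, 45° for persistent activity, 90° for arrested transcription) plus the two intermediate regions between them. A minimal Python sketch of such a categorization follows; the locus labels and the ±5° anchor width are assumptions for illustration, and the TockyLocus R package's own names and cut-offs may differ:

```python
import numpy as np

# Hypothetical labels for the five loci spanning 0-90 degrees.
LOCI = ["New", "NewToPersistent", "Persistent", "PersistentToArrested", "Arrested"]

def classify_timer_angle(angles_deg, width=5.0):
    """Assign each Timer Angle (0-90 degrees) to one of five loci:
    narrow bands around the 0/45/90 anchors plus the two regions between."""
    angles = np.asarray(angles_deg, dtype=float)
    edges = [0 + width, 45 - width, 45 + width, 90 - width]  # assumed cut-offs
    idx = np.digitize(angles, edges)
    return [LOCI[i] for i in idx]

def locus_percentages(angles_deg):
    """Percentage of cells per locus, the per-sample summary used downstream."""
    labels = classify_timer_angle(angles_deg)
    return {locus: 100.0 * labels.count(locus) / len(labels) for locus in LOCI}
```

The percentage-per-locus output is what links the angular dynamics to the cell-population statistics described in the abstract.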
Pub Date : 2025-08-21eCollection Date: 2025-01-01DOI: 10.1093/biomethods/bpaf064
Xiaoyu Duan, Vipul Periwal
Inferring nonlinear dynamics and parameters in biological data modeling is challenging. Standard parameter optimization methods are difficult to constrain to biological ranges, especially for nonlinear models. We propose a novel method to evaluate and improve putative models using neural networks to simultaneously address biological modeling, parametrization, and parameter inference. As an example, utilizing data from clinical frequently sampled intravenous glucose tolerance testing, we introduce two physiological lipolysis models of glucose, insulin, and free fatty acid dynamics. Parameter values are obtained via optimization from the limited clinical data. We then generate simulated data from the model by sampling parameters within physiological ranges while ensuring that the joint parameter distributions are physiologically appropriate. A convolutional neural network is trained to take the simulated glucose, insulin, and free fatty acid time courses as input and output the model parameters. We evaluate the performance of the trained neural network for both parameter inference and trajectory reconstruction using a testing dataset, optimized model-fitting curves, and real physiological data, and show that it enables accurate inference across all three settings. The trained neural network produces consistently high R2 values and low P-values across different feature engineering strategies and training dataset sizes. We assess the impact of feature engineering choices and training dataset size on inference performance, demonstrating that appropriately designed feature transformations and specific activation function choices improve accuracy. Our results establish a deep learning framework for parameter inference in mathematical models, which can be adapted to various physiological systems.
{"title":"Deep learning approach to parameter optimization for physiological models.","authors":"Xiaoyu Duan, Vipul Periwal","doi":"10.1093/biomethods/bpaf064","DOIUrl":"10.1093/biomethods/bpaf064","url":null,"abstract":"<p><p>Inferring nonlinear dynamics and parameters in biological data modeling is challenging. Standard parameter optimization methods are difficult to constrain to biological ranges, especially for nonlinear models. We propose a novel method to evaluate and improve putative models using neural networks to simultaneously address biological modeling, parametrization, and parameter inference. As an example, utilizing data from clinical frequently sampled intravenous glucose tolerance testing, we introduce two physiological lipolysis models of glucose, insulin, and free fatty acid dynamics. Parameter values are obtained via optimization from the limited clinical data. We then generate simulated data from the model by sampling parameters within physiological ranges while ensuring that the joint parameter distributions are physiologically appropriate. A convolutional neural network is trained to take the simulated glucose, insulin, and free fatty acid time courses as input and output the model parameters. We evaluate the performance of the trained neural network for both parameter inference and trajectory reconstruction using a testing dataset, optimized model-fitting curves, and real physiological data and show that it enables accurate inference across all three settings. The trained neural network produces consistently high <i>R</i> <sup>2</sup> values and low <i>P</i>-values across different feature engineering strategies and training dataset sizes. We assess the impact of feature engineering choices and training dataset size on inference performance, demonstrating that appropriately designed feature transformations and specific activation function choices improve accuracy. 
Our results establish a deep learning framework for parameter inference in mathematical models, which can be adapted to various physiological systems.</p>","PeriodicalId":36528,"journal":{"name":"Biology Methods and Protocols","volume":"10 1","pages":"bpaf064"},"PeriodicalIF":1.3,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12553359/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145373013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
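The simulate-then-infer loop described above can be sketched compactly. This toy uses a one-compartment exponential decay in place of the paper's glucose/insulin/free-fatty-acid ODE models, and plain linear least squares on log-trajectories as a stand-in for the convolutional neural network; the sampling grid and parameter ranges are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 180, 30)  # minutes; hypothetical sampling grid

def simulate(g0, k):
    """Toy one-compartment decay, standing in for the paper's ODE models."""
    return g0 * np.exp(-k * t)

# 1. Sample parameters within plausible ranges and simulate training curves.
g0s = rng.uniform(5, 15, 2000)      # baseline concentration
ks = rng.uniform(0.01, 0.05, 2000)  # decay rate per minute
X = np.log(np.array([simulate(g, k) for g, k in zip(g0s, ks)]))
Y = np.column_stack([np.log(g0s), ks])

# 2. Fit the inverse map curve -> parameters (linear least squares on
#    log-trajectories, a stand-in for the paper's CNN).
X1 = np.column_stack([X, np.ones(len(X))])
W, *_ = np.linalg.lstsq(X1, Y, rcond=None)

# 3. Infer parameters for an unseen curve and undo the log transform.
pred = np.append(np.log(simulate(10.0, 0.03)), 1.0) @ W
g0_hat, k_hat = np.exp(pred[0]), pred[1]
```

For this toy model the log-trajectory is exactly linear in (log g0, k), so the linear fit recovers held-out parameters; the paper's CNN plays the same role for models where no such closed-form inverse exists.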