Pub Date: 2025-11-26; eCollection Date: 2025-01-01; DOI: 10.1093/biomethods/bpaf078
Malte Willmes, Anders Varmann Aamodt, Børge Solli Andreassen, Lina Victoria Tuddenham Haug, Enghild Steinkjer, Gunnel M Østborg, Gitte Løkeberg, Peder Fiske, Geir R Brandt, Terje Mikalsen, Arne Siversten, Magnus Moustache, June Larsen Ydsti, Bjørn Florø-Larsen
Escaped farmed salmon are a major concern for wild Atlantic salmon (Salmo salar) stocks in Norway. Fish scale analysis is a well-established method for distinguishing farmed from wild fish, but the process is labor and time intensive. Deep learning has recently been shown to automate this task with high accuracy, though typically on relatively small and geographically limited datasets. Here we train and validate a new convolutional neural network on nearly 90 000 scale images from two national archives, encompassing heterogeneous imaging protocols, hundreds of rivers, and time series extending back to the 1930s. The model achieved an F1 score of 0.95 on a large, independent test set, with predictions closely matching both genetic reference samples and known farmed-origin scales. By developing and testing this new model on a large and diverse dataset, we demonstrate that deep learning generalizes robustly across ecological and methodological contexts, supporting its use as a validated, large-scale tool for monitoring escaped farmed salmon.
{"title":"Identifying escaped farmed salmon from fish scales using deep learning.","authors":"Malte Willmes, Anders Varmann Aamodt, Børge Solli Andreassen, Lina Victoria Tuddenham Haug, Enghild Steinkjer, Gunnel M Østborg, Gitte Løkeberg, Peder Fiske, Geir R Brandt, Terje Mikalsen, Arne Siversten, Magnus Moustache, June Larsen Ydsti, Bjørn Florø-Larsen","doi":"10.1093/biomethods/bpaf078","DOIUrl":"10.1093/biomethods/bpaf078","url":null,"abstract":"<p><p>Escaped farmed salmon are a major concern for wild Atlantic salmon (<i>Salmo salar</i>) stocks in Norway. Fish scale analysis is a well-established method for distinguishing farmed from wild fish, but the process is labor and time intensive. Deep learning has recently been shown to automate this task with high accuracy, though typically on relatively small and geographically limited datasets. Here we train and validate a new convolutional neural network on nearly 90 000 scale images from two national archives, encompassing heterogeneous imaging protocols, hundreds of rivers, and time series extending back to the 1930s. The model achieved an F1 score of 0.95 on a large, independent test set, with predictions closely matching both genetic reference samples and known farmed-origin scales. By developing and testing this new model on a large and diverse dataset, we demonstrate that deep learning generalizes robustly across ecological and methodological contexts, supporting its use as a validated, large-scale tool for monitoring escaped farmed salmon.</p>","PeriodicalId":36528,"journal":{"name":"Biology Methods and Protocols","volume":"10 1","pages":"bpaf078"},"PeriodicalIF":1.3,"publicationDate":"2025-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12647055/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145640650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-21; eCollection Date: 2025-01-01; DOI: 10.1093/biomethods/bpaf088
Ravi Shankar, Fiona Devi, Xu Qian
The integration of computational methods with traditional qualitative research has emerged as a transformative paradigm in healthcare research. Computational Grounded Theory (CGT) combines the interpretive depth of grounded theory with computational techniques including machine learning and natural language processing. This systematic review examines CGT application in healthcare research through analysis of eight studies demonstrating the method's utility across diverse contexts. Following a systematic search across five databases and PRISMA-aligned screening, eight papers applying CGT in healthcare were analyzed. Studies spanned COVID-19 risk perception, medical AI adoption, mental health interventions, diabetes management, women's health technology, online health communities, and social welfare systems, employing computational techniques including Latent Dirichlet Allocation (LDA), sentiment analysis, word embeddings, and deep learning algorithms. Results demonstrate CGT's capacity for analyzing large-scale textual data (100 000+ documents) while maintaining theoretical depth, with consistent reports of enhanced analytical capacity, latent pattern identification, and novel theoretical insights. However, challenges include technical complexity, interpretation validity, resource requirements, and the need for interdisciplinary expertise. CGT represents a promising methodological innovation for healthcare research, particularly valuable for understanding complex healthcare phenomena, patient experiences, and technology adoption. The small sample size (8 of 892 screened articles) reflects CGT's nascent application in healthcare and limits generalizability. Future research should focus on standardizing methodological procedures, developing best practices, expanding applications, and addressing accessibility barriers.
{"title":"A systematic review of the application of computational grounded theory method in healthcare research.","authors":"Ravi Shankar, Fiona Devi, Xu Qian","doi":"10.1093/biomethods/bpaf088","DOIUrl":"10.1093/biomethods/bpaf088","url":null,"abstract":"<p><p>The integration of computational methods with traditional qualitative research has emerged as a transformative paradigm in healthcare research. Computational Grounded Theory (CGT) combines the interpretive depth of grounded theory with computational techniques including machine learning and natural language processing. This systematic review examines CGT application in healthcare research through analysis of eight studies demonstrating the method's utility across diverse contexts. Following systematic search across five databases and PRISMA-aligned screening, eight papers applying CGT in healthcare were analyzed. Studies spanned COVID-19 risk perception, medical AI adoption, mental health interventions, diabetes management, women's health technology, online health communities, and social welfare systems, employing computational techniques including Latent Dirichlet Allocation (LDA), sentiment analysis, word embeddings, and deep learning algorithms. Results demonstrate CGT's capacity for analyzing large-scale textual data (100 000+ documents) while maintaining theoretical depth, with consistent reports of enhanced analytical capacity, latent pattern identification, and novel theoretical insights. However, challenges include technical complexity, interpretation validity, resource requirements, and need for interdisciplinary expertise. CGT represents a promising methodological innovation for healthcare research, particularly for understanding complex phenomena, patient experiences, and technology adoption, though the small sample size (8 of 892 screened articles) reflects its nascent application and limits generalizability. CGT represents a promising methodological innovation for healthcare research, particularly valuable for understanding complex healthcare phenomena, patient experiences, and technology adoption. The small sample size (8 of 892 screened articles) reflects CGT's nascent application in healthcare, limiting generalizability. Future research should focus on standardizing methodological procedures, developing best practices, expanding applications, and addressing accessibility barriers.</p>","PeriodicalId":36528,"journal":{"name":"Biology Methods and Protocols","volume":"10 1","pages":"bpaf088"},"PeriodicalIF":1.3,"publicationDate":"2025-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12744390/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145858116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-20; eCollection Date: 2025-01-01; DOI: 10.1093/biomethods/bpaf087
Natalie A Holroyd, Zhongwang Li, Claire Walsh, Emmeline Brown, Rebecca J Shipley, Simon Walker-Samuel
Deep learning has become an invaluable tool for bioimage analysis but, while open-source cell annotation software such as Cellpose is widely used, an equivalent tool for three-dimensional (3D) vascular annotation does not exist. With the vascular system being directly impacted by a broad range of diseases, there is significant medical interest in quantitative analysis for vascular imaging. We present a new deep learning model, coupled with a human-in-the-loop training approach, for segmentation of vasculature that is generalizable across tissues, modalities, scales, and pathologies. To create a generalizable model, a 3D convolutional neural network was trained using curated data from modalities including optical imaging, computed tomography, and photoacoustic imaging. Through this varied training set, the model was forced to learn features of vessels that are common across modalities and scales. Following this, the pre-trained 'foundation' model was fine-tuned to different applications with a minimal amount of manually labelled ground truth data. The foundation model could be specialized to a new dataset using as little as 0.3% of the volume of that dataset for fine-tuning. The fine-tuned model was able to segment 3D vasculature with a high level of accuracy (DICE coefficient between 0.81 and 0.98) across a range of applications. These results show that a general model trained on a highly varied data catalogue can be specialized to new applications with minimal human input. This model and training approach enable users to produce accurate segmentations of 3D vascular networks without the need to label large amounts of training data.
{"title":"tUbeNet: a generalizable deep learning tool for 3D vessel segmentation.","authors":"Natalie A Holroyd, Zhongwang Li, Claire Walsh, Emmeline Brown, Rebecca J Shipley, Simon Walker-Samuel","doi":"10.1093/biomethods/bpaf087","DOIUrl":"10.1093/biomethods/bpaf087","url":null,"abstract":"<p><p>Deep learning has become an invaluable tool for bioimage analysis but, while open-source cell annotation software such as Cellpose is widely used, an equivalent tool for three-dimensional (3D) vascular annotation does not exist. With the vascular system being directly impacted by a broad range of diseases, there is significant medical interest in quantitative analysis for vascular imaging. We present a new deep learning model, coupled with a human-in-the-loop training approach, for segmentation of vasculature that is generalizable across tissues, modalities, scales, and pathologies. To create a generalizable model, a 3D convolutional neural network was trained using curated data from modalities including optical imaging, computational tomography, and photoacoustic imaging. Through this varied training set, the model was forced to learn common features of vessels' cross-modality and scale. Following this, the pre-trained 'foundation' model was fine-tuned to different applications with a minimal amount of manually labelled ground truth data. It was found that the foundation model could be specialized to a new datasets using as little as 0.3% of the volume of said dataset for fine-tuning. The fine-tuned model was able to segment 3D vasculature with a high level of accuracy (DICE coefficient between 0.81 and 0.98) across a range of applications. These results show a general model trained on a highly varied data catalogue can be specialized to new applications with minimal human input. This model and training approach enables users to produce accurate segmentations of 3D vascular networks without the need to label large amounts of training data.</p>","PeriodicalId":36528,"journal":{"name":"Biology Methods and Protocols","volume":"10 1","pages":"bpaf087"},"PeriodicalIF":1.3,"publicationDate":"2025-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12679403/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145702224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-12; eCollection Date: 2025-01-01; DOI: 10.1093/biomethods/bpaf081
Suleiman Danladi, Ayinde Abdulwahab Adeniyi, Zainab Iman Sani, Adegbenro Temitope
HIV is a global public health challenge. The reverse transcriptase (RT) enzyme facilitates an important step in HIV replication. Inhibition of this enzyme provides a critical target for HIV treatment. The aim of this study is to employ computational techniques to screen bioactive compounds from different medicinal plants toward identifying potent HIV-1 RT inhibitors with better activity than the current ones. We conducted a literature review of HIV-1 RT inhibitors and compiled eighty-four (84) compounds, while the target receptor (PDB ID: 1REV) was retrieved from the Protein Data Bank. Molecular docking and Molecular Mechanics/Generalized Born Surface Area (MM/GBSA) evaluations were performed using the Maestro Schrodinger software interface. Drug-likeness and pharmacokinetic profile evaluations were carried out using the SwissADME and ADMETlab3.0 web servers. Lastly, a molecular dynamics simulation study was conducted using the Desmond tool of Schrodinger. The molecular docking study revealed that Rosmarinic acid (-13.265 kcal/mol), Efavirenz, the standard drug (-12.175 kcal/mol), Arctigenin (-11.322 kcal/mol), Luteolin (-11.274 kcal/mol), Anolignan A (-11.157 kcal/mol), and Quercetin (-11.129 kcal/mol) can effectively bind with high affinity and low energy values to the HIV-1 RT enzyme. The relative binding free energies (ΔG bind) of Rosmarinic acid, Efavirenz, Arctigenin, Luteolin, Anolignan A, and Quercetin were -66.85, -66.53, -51.83, -49.77, -58.17, and -49.62, respectively. The ADMET profile of Arctigenin was similar to that of Efavirenz and better than those of the other top compounds. The molecular dynamics simulation study showed better stability of Rosmarinic acid in the active site of HIV-1 RT than the cocrystallized ligand. Out of the top five compounds identified in this study, Rosmarinic acid, a known in vitro inhibitor of HIV-1 RT, showed the most promising prediction. However, further in vivo studies and human clinical trials are required to provide more concrete information regarding its efficacy as a potent HIV-1 RT inhibitor.
{"title":"In-silico identification of phytochemical compounds from various medicinal plants as potent HIV-1 non-nucleoside reverse transcriptase inhibitors utilizing molecular docking and molecular dynamics simulations.","authors":"Suleiman Danladi, Ayinde Abdulwahab Adeniyi, Zainab Iman Sani, Adegbenro Temitope","doi":"10.1093/biomethods/bpaf081","DOIUrl":"10.1093/biomethods/bpaf081","url":null,"abstract":"<p><p>HIV is a global public health challenge. The Reverse Transcriptase (RT) enzyme facilitates an important step in HIV replication. Inhibition of this enzyme provides a critical target for HIV treatment. The aim of this study is to employ computational techniques to screen bioactive compounds from different medicinal plants toward identifying potent HIV-1 RT inhibitors better activity than the current ones. We conducted a literature review of HIV-1 RT inhibitors, and eighty-four (84) compounds, while target receptor (1REV) was retrieved from Protein Data Bank. The molecular docking and Molecular Mechanics/Generalized Born Surface Area (MM/GBSA) evaluations were performed using the Maestro Schrodinger software user interface. The drug-likeness and pharmacokinetic profile evaluation were carried out using SwissADME and ADMETlab3.0 web servers. Lastly, molecular dynamics simulation study was conducted using the Desmond tool of Schrodinger. The molecular docking study revealed that Rosmarinic acid (-13.265 kcal/mol), Evafirenz/standard drug (-12.175 kcal/mol), Arctigenin (-11.322 kcal/mol), Luteolin (-11.274 kcal/mol), Anolignan A (-11.157 kcal/mol), and Quercetin (-11.129 kcal/mol) can effectively bind with high affinity and low energy values to the HIV-1 RT enzyme. The relative binding free energies of Rosmarinic acid, Evafirenz, Arctigenin, Luteolin, Anolignan A, and Quercetin were -66.85, -66.53, -51.83, -49.77, -58.17, and -49.62 Δg bind, respectively. The ADMET profile of Arctigenin was similar to that of Efavirenz, and better than that of other top compounds. The molecular dynamics simulation study showed better stability of rosmarinic acid with the active site of HIV-1 NNRT than the cocrystalized ligand. Out of the top five compounds identified in this study, Rosmarinic acid, a current inhibitor of HIV-1 RT in vitro, showed the most promising prediction. However, further in vivo studies and human clinical trials are required to provide more concrete information regarding its efficacy as potent HIV-1 RT inhibitors.</p>","PeriodicalId":36528,"journal":{"name":"Biology Methods and Protocols","volume":"10 1","pages":"bpaf081"},"PeriodicalIF":1.3,"publicationDate":"2025-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12619908/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145543080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-12; eCollection Date: 2025-01-01; DOI: 10.1093/biomethods/bpaf085
Azwa Suraya Mohd Dan, Adam Linoby, Sazzli Shahlan Kasim, Sufyan Zaki, Razif Sazali, Yusandra Yusoff, Zulqarnain Nasir, Amrun Haziq Abidin
The potential of artificial intelligence (AI) to personalize dietary and exercise advice for obesity management is increasingly evident. However, the effectiveness and appropriateness of AI-generated recommendations hinge significantly on input quality and structured guidance. Despite growing interest, there remains a notable gap regarding a robust and validated prompt-generation mechanism designed explicitly for obesity-related lifestyle planning. This study aimed to evaluate and refine the quality of a personalized AI-driven framework (NExGEN-ChatGPT) for dietary and exercise prescriptions in obese adults, employing the Fuzzy Delphi Method (FDM) to capture and integrate expert consensus. A multidisciplinary expert panel, comprising 21 professionals from nutrition, medicine, psychology, fitness, and AI domains, was engaged in this study. Using structured questionnaires, the experts systematically assessed and refined six primary constructs, further detailed into several evaluative elements, resulting in the consensus validation of 111 specific criteria. Findings identified critical consensus-driven standards essential for personalized, safe, and feasible obesity management through AI. Moreover, the study revealed prioritized criteria pivotal for maintaining practical relevance, safety, and high-quality personalized recommendations. Consequently, this validated framework provides a substantial foundation for subsequent real-world application and further research, thereby enhancing the effectiveness, scalability, and individualization of obesity interventions leveraging AI.
{"title":"Validation of a personalized AI prompt generator (NExGEN-ChatGPT) for obesity management using fuzzy Delphi method.","authors":"Azwa Suraya Mohd Dan, Adam Linoby, Sazzli Shahlan Kasim, Sufyan Zaki, Razif Sazali, Yusandra Yusoff, Zulqarnain Nasir, Amrun Haziq Abidin","doi":"10.1093/biomethods/bpaf085","DOIUrl":"10.1093/biomethods/bpaf085","url":null,"abstract":"<p><p>The potential of artificial intelligence (AI) to personalize dietary and exercise advice for obesity management is increasingly evident. However, the effectiveness and appropriateness of AI-generated recommendations hinge significantly on input quality and structured guidance. Despite growing interest, there remains a notable gap regarding a robust and validated prompt-generation mechanism designed explicitly for obesity-related lifestyle planning. This study aimed to evaluate and refine the quality of a personalized AI-driven framework (NExGEN-ChatGPT) for dietary and exercise prescriptions in obese adults, employing the Fuzzy Delphi Method (FDM) to capture and integrate expert consensus. A multidisciplinary expert panel, comprising 21 professionals from nutrition, medicine, psychology, fitness, and AI domains, was engaged in this study. Using structured questionnaires, the experts systematically assessed and refined six primary constructs, further detailed into several evaluative elements, resulting in the consensus validation of 111 specific criteria. Findings identified critical consensus-driven standards essential for personalized, safe, and feasible obesity management through AI. Moreover, the study revealed prioritized criteria pivotal for maintaining practical relevance, safety, and high-quality personalized recommendations. Consequently, this validated framework provides a substantial foundation for subsequent real-world application and further research, thereby enhancing the effectiveness, scalability, and individualization of obesity interventions leveraging AI.</p>","PeriodicalId":36528,"journal":{"name":"Biology Methods and Protocols","volume":"10 1","pages":"bpaf085"},"PeriodicalIF":1.3,"publicationDate":"2025-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12657132/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145649544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-08; eCollection Date: 2025-01-01; DOI: 10.1093/biomethods/bpaf076
[This corrects the article DOI: 10.1093/biomethods/bpaf040.].
{"title":"Correction to: AllerTrans: a deep learning method for predicting the allergenicity of protein sequences.","authors":"","doi":"10.1093/biomethods/bpaf076","DOIUrl":"10.1093/biomethods/bpaf076","url":null,"abstract":"<p><p>[This corrects the article DOI: 10.1093/biomethods/bpaf040.].</p>","PeriodicalId":36528,"journal":{"name":"Biology Methods and Protocols","volume":"10 1","pages":"bpaf076"},"PeriodicalIF":1.3,"publicationDate":"2025-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12596721/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145490533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-07; eCollection Date: 2025-01-01; DOI: 10.1093/biomethods/bpaf083
Andre Kumar, Evan Baum, Caitlin Parmer-Chow, John Kugler
The global shortage of sonographers has created significant barriers to timely ultrasound diagnostics across medical specialties. Deep learning (DL) algorithms have potential to enhance image acquisition by clinicians without formal sonography training, potentially expanding access to crucial diagnostic imaging in resource-limited settings. This study evaluates whether DL-enabled devices improve acquisition of multi-view limited echocardiograms by healthcare providers without previous cardiac ultrasound training. In this single-center randomized controlled trial (2023-2024), internal medicine residents (N = 38) without prior sonography training received a portable ultrasound device with (N = 19) or without (N = 19) DL capability for a two-week clinical integration period during regular patient care on hospital wards. The DL software provided real-time guidance for probe positioning and image quality assessment across five standard echocardiographic views. The primary outcome was total acquisition time for a comprehensive five-view limited echocardiogram (parasternal long axis, parasternal short axis, apical 4-chamber, subcostal, and inferior vena cava views). Assessments occurred at randomization and after two weeks using a standardized patient. Secondary outcomes included image quality using a validated assessment tool and participant attitudes toward the technology. Baseline scan times and image quality scores were comparable between groups. At two-week follow-up, participants using DL-equipped devices demonstrated significantly faster total scan times (152 s [IQR 115-195] versus 266 s [IQR 206-324]; P < 0.001; Cohen's D = 1.7) and superior image quality with higher modified RACE scores (15 [IQR 10-18] versus 11 [IQR 7-13.5]; P = 0.034; Cohen's D = 0.84). Performance improvements were most pronounced in technically challenging views. Both groups reported similar levels of trust in DL-functionality. Ultrasound devices incorporating deep learning algorithms significantly improve both acquisition speed and image quality of comprehensive echocardiographic examinations by novice users. These findings suggest DL-enhanced ultrasound may help address critical gaps in diagnostic imaging capacity by enabling non-specialists to acquire clinically useful cardiac images.
{"title":"Limited echocardiogram acquisition by novice clinicians aided with deep learning: A randomized controlled trial.","authors":"Andre Kumar, Evan Baum, Caitlin Parmer-Chow, John Kugler","doi":"10.1093/biomethods/bpaf083","DOIUrl":"10.1093/biomethods/bpaf083","url":null,"abstract":"<p><p>The global shortage of sonographers has created significant barriers to timely ultrasound diagnostics across medical specialties. Deep learning (DL) algorithms have potential to enhance image acquisition by clinicians without formal sonography training, potentially expanding access to crucial diagnostic imaging in resource-limited settings. This study evaluates whether DL-enabled devices improve acquisition of multi-view limited echocardiograms by healthcare providers without previous cardiac ultrasound training. In this single-center randomized controlled trial (2023-2024), internal medicine residents (<i>N</i> = 38) without prior sonography training received a portable ultrasound device with (<i>N</i> = 19) or without (<i>N</i> = 19) DL capability for a two-week clinical integration period during regular patient care on hospital wards. The DL software provided real-time guidance for probe positioning and image quality assessment across five standard echocardiographic views. The primary outcome was total acquisition time for a comprehensive five-view limited echocardiogram (parasternal long axis, parasternal short axis, apical 4-chamber, subcostal, and inferior vena cava views). Assessments occurred at randomization and after two weeks using a standardized patient. Secondary outcomes included image quality using a validated assessment tool and participant attitudes toward the technology. Baseline scan times and image quality scores were comparable between groups. At two-week follow-up, participants using DL-equipped devices demonstrated significantly faster total scan times (152 s [IQR 115-195] versus 266 s [IQR 206-324]; <i>P</i> < 0.001; Cohen's <i>D</i> = 1.7) and superior image quality with higher modified RACE scores (15 [IQR 10-18] versus 11 [IQR 7-13.5]; <i>P</i> = 0.034; Cohen's <i>D</i> = 0.84). Performance improvements were most pronounced in technically challenging views. Both groups reported similar levels of trust in DL-functionality. Ultrasound devices incorporating deep learning algorithms significantly improve both acquisition speed and image quality of comprehensive echocardiographic examinations by novice users. These findings suggest DL-enhanced ultrasound may help address critical gaps in diagnostic imaging capacity by enabling non-specialists to acquire clinically useful cardiac images.</p>","PeriodicalId":36528,"journal":{"name":"Biology Methods and Protocols","volume":"10 1","pages":"bpaf083"},"PeriodicalIF":1.3,"publicationDate":"2025-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12627401/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145565711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-07; eCollection Date: 2025-01-01; DOI: 10.1093/biomethods/bpaf082
Katharina Schiller, Anja Meierhenrich, Sanja Zenker, Lennart M Sielmann, Bianca Laker, Andrea Bräutigam
DAP-seq is an in vitro method to analyze the relative binding affinity of transcription factors to DNA. It is a fast and scalable method, and its application to plant transcription factors with a binary bound/not bound readout was first published in 2016 by O'Malley and colleagues. Since DAP-seq only requires the transcription factor protein and genomic DNA of a species, it can easily be applied to any species with DNA extraction protocols and available genome sequence resources. We present an optimized DNA Affinity Purification sequencing (DAP-seq) protocol for the relative quantification of protein-DNA interactions and a practical guide for data analysis. The transcription factor of interest is expressed in vitro and fused to a tag, such as a HaloTag. Genomic DNA is fragmented, adapters are ligated, and the resulting library is added to the purified TF::HaloTag protein; nonspecifically bound DNA is washed away. After the bound DNA is recovered, we add a quantification step that homogenizes library size and improves reproducibility. The expanded downstream bioinformatic analysis identifies transcription factor binding sites in the genome, followed by analyses of replicate robustness by comparing three different peak height measures, control characteristics, and relative binding affinity.
{"title":"Optimized DNA affinity purification sequencing determines relative binding affinity of transcription factors.","authors":"Katharina Schiller, Anja Meierhenrich, Sanja Zenker, Lennart M Sielmann, Bianca Laker, Andrea Bräutigam","doi":"10.1093/biomethods/bpaf082","DOIUrl":"10.1093/biomethods/bpaf082","url":null,"abstract":"<p><p>DAP-seq is an <i>in vitro</i> method to analyze the relative binding affinity of transcription factors to DNA. It is a fast and scalable method and its application to plant transcription factors with a binary bound/not bound readout was first published in 2016 by O'Malley and colleagues. Since DAP-seq only requires the transcription factor protein and genomic DNA of a species, it can easily be applied to any species with DNA extraction protocols and available genome sequence resources. We present an optimized DNA Affinity Purification sequencing (DAP-seq) protocol for the relative quantification of protein-DNA interactions and a practical guide for data analysis. The desired transcription factor is expressed <i>in vitro</i> and fused to a tag, such as a HaloTag. Genomic DNA is fragmented and adapters are ligated, added to the purified TF::HaloTag protein, and unspecifically bound DNA is washed away. After the bound DNA is recovered, we add a quantification step which homogenizes library size and improves reproducibility. The expanded downstream bioinformatic analysis identifies transcription factor binding sites in the genome followed by analyses of replicate robustness by comparing three different peak height measures, control characteristics, and relative binding affinity.</p>","PeriodicalId":36528,"journal":{"name":"Biology Methods and Protocols","volume":"10 1","pages":"bpaf082"},"PeriodicalIF":1.3,"publicationDate":"2025-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12677946/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145702219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-07; eCollection Date: 2025-01-01; DOI: 10.1093/biomethods/bpaf071
Zeynep Deniz Şahin İnan, Rasim Hamutoğlu, Serpil Ünver Saraydın
Histological embedding and staining techniques are essential for examining tissue and cellular morphology. This study compares two embedding methods, JB-4™ (a glycol methacrylate-based resin) and conventional paraffin, to determine which provides superior visualization of liver and long bone tissues under light microscopy. Liver tissues from both embedding protocols were stained using the Periodic Acid-Schiff method and silver impregnation. JB-4 sections were also stained with acid fuchsin and toluidine blue, while paraffin sections were stained with hematoxylin and eosin. Contrary to the common assumption that JB-4 may interfere with certain staining protocols, acid fuchsin and toluidine blue yielded high-contrast, structurally detailed results in JB-4 sections. Both techniques preserved liver morphology. However, JB-4 demonstrated higher resolution and enhanced visualization of intracellular structures, and it also preserved glycogen more effectively. Cellular structures, including nuclei, nucleoli, bile duct epithelial cells, and Kupffer cells, were observed more distinctly in JB-4 preparations. Reticular fibers were visualized similarly with both embedding techniques. In contrast, paraffin embedding better preserved the overall tissue architecture. In long bone specimens, paraffin sections frequently displayed poorly defined structures, while JB-4 offered clearer visualization of chondrocyte lacunae, osteocyte nuclei, lamellar bone, and bone marrow cells. JB-4 and paraffin each offer distinct advantages depending on tissue type and histological objective. JB-4 appears to be compatible with a broader range of stains than previously reported, which expands its utility in detailed tissue analysis. The selection of an embedding method should align with the morphological characteristics of the target tissue and the specific research goals.
{"title":"Comparative analysis of paraffin and JB-4 embedding techniques in light microscopy.","authors":"Zeynep Deniz Şahin İnan, Rasim Hamutoğlu, Serpil Ünver Saraydın","doi":"10.1093/biomethods/bpaf071","DOIUrl":"10.1093/biomethods/bpaf071","url":null,"abstract":"<p><p>Histological embedding and staining techniques are essential for examining tissue and cellular morphology. This study compares two embedding methods-JB-4™, a glycol methacrylate-based resin, and conventional paraffin-to determine which method provides superior visualization of liver and long bone tissues under light microscopy. Liver tissues from both embedding protocols were stained using the Periodic Acid-Schiff method and silver impregnation method. JB-4 sections were also stained with acid fuchsin and toluidine blue, while paraffin sections were stained with hematoxylin and eosin staining. Contrary to the common assumption that JB-4 may interferes with certain staining protocols, acid fuchsin and toluidine blue yielded high-contrast, structurally detailed results in JB-4 sections. Both techniques preserved liver morphology. However, JB-4 demonstrated higher resolution and enhanced visualization of intracellular structures. JB4 also preservedglycogen more effectively. Cellular structures including nuclei, nucleoli, bile duct epithelial cells, and Kupffer cells, were observedmore distinctly in JB-4 preparations. Reticular fibers were similarly visualized with both embedding techniques. In contrast, paraffin embedding provided better preserved overall tissue architecture. Whilelong bone specimens, paraffin sections frequently displayed poorly defined structures, while JB-4 offered clearer visualization of chondrocyte lacunae, osteocyte nuclei, lamellar bone, and bone marrow cells. JB-4 and paraffin each offer distinct advantages depending on tissue type and histological objective. JB-4 appears to be compatible with a broader range of stains than was previously reported, which expands its utility in detailed tissue analysis. The selection of an embedding method should align with the morphological characteristics of the target tissue and the specific research goals.</p>","PeriodicalId":36528,"journal":{"name":"Biology Methods and Protocols","volume":"10 1","pages":"bpaf071"},"PeriodicalIF":1.3,"publicationDate":"2025-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12598145/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145496196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-04; eCollection Date: 2025-01-01; DOI: 10.1093/biomethods/bpaf079
Natalya B Zakharzhevskaya, Dmitry A Kardonsky, Elizaveta A Vorobyeva, Olga Y Shagaleeva, Artemiy S Silantiev, Victoriia D Kazakova, Daria A Kashatnikova, Tatiana N Kalachnuk, Irina V Kolesnikova, Andrey V Chaplin, Anna A Vanyushkina, Boris A Efimov
Background: Headspace gas chromatography-mass spectrometry (HS GC/MS) has traditionally been applied to analyze samples with a high content of volatile components, such as stool samples. Nevertheless, other types of samples, for example urine, may also contain volatile compounds and serve as valuable sources of diagnostic information. However, the content of volatile components in urine is considerably lower than in stool samples, necessitating modification of the HS GC/MS method. Such optimization could be particularly valuable for patients with inflammatory bowel disease (IBD), for whom providing a stool sample can sometimes be challenging. The aim of this work was to optimize a method for assessing volatile components in urine samples.
Methods: Urine samples were collected from 10 patients with IBD and 10 healthy controls. Laboratory, endoscopic, and histopathological analyses confirmed the IBD diagnosis. Metabolomic profiling was performed using HS GC/MS (Shimadzu QP2010 Ultra with HS-20 extractor).
Results: Volatile metabolites in urine samples suitable for analysis were acquired through optimized sample preparation procedures, including sampling vapor with a salt mixture, increasing the sample volume, adjusting the temperature regime during preparation, and fine-tuning the delay time prior to mass spectrometer activation. The most comprehensive and high-quality results were obtained using a triple extraction method with cryo-trap technology. As a result of HS GC/MS method optimization, urine metabolome analysis of IBD patients enabled the identification of biomarkers that can be utilized for the clinical detection of IBD. 2-Heptanone and pentadecane were identified as IBD-associated biomarkers.
Conclusions: The optimized preparation protocols enable the HS GC/MS method to be applied effectively to the analysis of volatile components in urine samples. The modified HS GC/MS method can be scaled up for large-sample analysis, both to detect the identified metabolites and to explore potential new biomarkers associated with IBD and other pathologies.
{"title":"Optimization of the HS-GC/MS technique for urine metabolomic profiling.","authors":"Natalya B Zakharzhevskaya, Dmitry A Kardonsky, Elizaveta A Vorobyeva, Olga Y Shagaleeva, Artemiy S Silantiev, Victoriia D Kazakova, Daria A Kashatnikova, Tatiana N Kalachnuk, Irina V Kolesnikova, Andrey V Chaplin, Anna A Vanyushkina, Boris A Efimov","doi":"10.1093/biomethods/bpaf079","DOIUrl":"10.1093/biomethods/bpaf079","url":null,"abstract":"<p><strong>Background: </strong>Headspace gas chromatography-mass spectrometry (HS GC-MS) traditionally has been applied to analyze samples with a high content of volatile components, such as stool samples. Nevertheless, other types of samples-for example, urine-may also contain volatile compounds and serve as valuable sources of diagnostic information. However, the content of volatile components in urine is considerably lower than in stool samples, necessitating modification of the HS GC/MS method. Such optimization could be particularly valuable for patients with inflammatory bowel disease (IBD), for whom providing a stool sample can sometimes be challenging. The aim of this work was to optimize a method for assessing volatile components in urine samples.</p><p><strong>Methods: </strong>Urine samples were collected from 10 patients with IBD and 10 healthy controls. Laboratory, endoscopic, and histopathological analyses confirmed the IBD diagnosis. Metabolomic profiling was performed using HS GC/MS (Shimadzu QP2010 Ultra with HS-20 extractor).</p><p><strong>Results: </strong>Volatile metabolites in urine samples suitable for analysis were acquired through optimized sample preparation procedures, including sampling vapor with a salt mixture, increasing the sample volume, adjusting the temperature regime during preparation, and fine-tuning the delay time prior to mass spectrometer activation. The most comprehensive and high-quality results were obtained using a triple extraction method with cryo-trap technology. As a result of HS GC/MS method optimization, urine metabolome analysis of IBD patients enabled the identification of biomarkers that can be utilized for the clinical detection of IBD. 2-Heptanone and pentadecane were identified as IBD-associated biomarkers.</p><p><strong>Conclusions: </strong>Optimized preparation protocols enable HS GC/MS method to be effectively applied for the analysis of volatile components in urine samples. The modified HS GC/MS method can be scaled up for large-sample analysis to both detect identified metabolites and explore potential new biomarkers associated with IBD and other pathologies.</p>","PeriodicalId":36528,"journal":{"name":"Biology Methods and Protocols","volume":"10 1","pages":"bpaf079"},"PeriodicalIF":1.3,"publicationDate":"2025-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12631782/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145589174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}