Imaging of cellular dynamics from a whole organism to subcellular scale with self-driving, multiscale microscopy
Pub Date: 2025-02-12 | DOI: 10.1038/s41592-025-02598-2 | Nature Methods 22(3): 569–578
Stephan Daetwyler, Hanieh Mazloom-Farsibaf, Felix Y. Zhou, Dagan Segal, Etai Sapoznik, Bingying Chen, Jill M. Westcott, Rolf A. Brekken, Gaudenz Danuser, Reto Fiolka
Most biological processes, from development to pathogenesis, span multiple time and length scales. While light-sheet fluorescence microscopy has become a fast and efficient method for imaging organisms, cells and subcellular dynamics, simultaneous observations across all these scales have remained challenging. Moreover, continuous high-resolution imaging inside living organisms has mostly been limited to a few hours, as regions of interest quickly move out of view due to sample movement and growth. Here, we present a self-driving, multiresolution light-sheet microscope platform controlled by custom Python-based software, to simultaneously observe and quantify subcellular dynamics in the context of entire organisms in vitro and in vivo over hours of imaging. We apply the platform to the study of developmental processes, cancer invasion and metastasis, and we provide quantitative multiscale analysis of immune–cancer cell interactions in zebrafish xenografts. A self-driving multiresolution light-sheet microscope enables the simultaneous observation and quantification of cellular and subcellular dynamics in the context of intact and developing organisms over many hours of imaging.
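The abstract describes custom Python-based control software that alternates between whole-organism overviews and high-resolution acquisitions while tracking a moving region of interest. The sketch below illustrates that self-driving, multiresolution loop in schematic form only; `acquire_overview`, `acquire_highres`, `move_stage` and the `microscope` object are hypothetical stand-ins, not functions from the published software.

```python
# Hypothetical sketch of a self-driving, multiresolution acquisition loop.
# The microscope-control calls are stand-ins, not the published Python software.
import time
import numpy as np

def find_roi(volume: np.ndarray) -> tuple[int, int, int]:
    """Locate the brightest region in a low-resolution overview, standing in
    for the tracking step that keeps the region of interest in view."""
    idx = np.unravel_index(np.argmax(volume), volume.shape)
    return tuple(int(i) for i in idx)

def run_session(microscope, hours: float = 12, interval_s: float = 60) -> None:
    """Alternate overview and high-resolution imaging for the given duration."""
    t_end = time.time() + hours * 3600
    while time.time() < t_end:
        overview = microscope.acquire_overview()   # low-res, whole-organism view
        z, y, x = find_roi(overview)               # e.g., a migrating cell of interest
        microscope.move_stage(z, y, x)             # re-center before growth/drift moves it away
        microscope.acquire_highres()               # subcellular-resolution stack at the ROI
        time.sleep(interval_s)                     # wait for the next time point
```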
{"title":"Imaging of cellular dynamics from a whole organism to subcellular scale with self-driving, multiscale microscopy","authors":"Stephan Daetwyler, Hanieh Mazloom-Farsibaf, Felix Y. Zhou, Dagan Segal, Etai Sapoznik, Bingying Chen, Jill M. Westcott, Rolf A. Brekken, Gaudenz Danuser, Reto Fiolka","doi":"10.1038/s41592-025-02598-2","DOIUrl":"10.1038/s41592-025-02598-2","url":null,"abstract":"Most biological processes, from development to pathogenesis, span multiple time and length scales. While light-sheet fluorescence microscopy has become a fast and efficient method for imaging organisms, cells and subcellular dynamics, simultaneous observations across all these scales have remained challenging. Moreover, continuous high-resolution imaging inside living organisms has mostly been limited to a few hours, as regions of interest quickly move out of view due to sample movement and growth. Here, we present a self-driving, multiresolution light-sheet microscope platform controlled by custom Python-based software, to simultaneously observe and quantify subcellular dynamics in the context of entire organisms in vitro and in vivo over hours of imaging. We apply the platform to the study of developmental processes, cancer invasion and metastasis, and we provide quantitative multiscale analysis of immune–cancer cell interactions in zebrafish xenografts. A self-driving multiresolution light-sheet microscope enables the simultaneous observation and quantification of cellular and subcellular dynamics in the context of intact and developing organisms over many hours of imaging.","PeriodicalId":18981,"journal":{"name":"Nature Methods","volume":"22 3","pages":"569-578"},"PeriodicalIF":36.1,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143409303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cellpose3: one-click image restoration for improved cellular segmentation
Pub Date: 2025-02-12 | DOI: 10.1038/s41592-025-02595-5 | Nature Methods 22(3): 592–599 | Open access PDF: https://www.nature.com/articles/s41592-025-02595-5.pdf
Carsen Stringer, Marius Pachitariu
Generalist methods for cellular segmentation have good out-of-the-box performance on a variety of image types; however, existing methods struggle for images that are degraded by noise, blurring or undersampling, all of which are common in microscopy. We focused the development of Cellpose3 on addressing these cases and here we demonstrate substantial out-of-the-box gains in segmentation and image quality for noisy, blurry and undersampled images. Unlike previous approaches that train models to restore pixel values, we trained Cellpose3 to output images that are well segmented by a generalist segmentation model, while maintaining perceptual similarity to the target images. Furthermore, we trained the restoration models on a large, varied collection of datasets, thus ensuring good generalization to user images. We provide these tools as ‘one-click’ buttons inside the graphical interface of Cellpose as well as in the Cellpose API. Cellpose3 employs deep-learning-based approaches for image restoration to improve cellular segmentation and shows strong generalized performance even on images degraded by noise, blurring or undersampling.
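The abstract points to "one-click" restoration in the Cellpose GUI and API. Below is a minimal sketch of restoration followed by segmentation, assuming the Cellpose 3 Python API (`cellpose.denoise.CellposeDenoiseModel` with a `restore_type` such as `denoise_cyto3`); exact names, arguments and return values may differ between releases, so the current Cellpose documentation should be treated as authoritative.

```python
# Minimal sketch: restore a degraded image, then segment it with the generalist model.
# Assumes cellpose >= 3.0 and its denoise module; verify names against the docs.
from cellpose import denoise, io

img = io.imread("noisy_cells.tif")          # a noisy, blurry or undersampled image

model = denoise.CellposeDenoiseModel(
    gpu=False,
    model_type="cyto3",                     # generalist segmentation model
    restore_type="denoise_cyto3",           # one of the Cellpose3 restoration models
)
masks, flows, styles, img_restored = model.eval(img, diameter=30, channels=[0, 0])

io.imsave("restored.tif", img_restored)     # restored image used for segmentation
print(f"segmented {int(masks.max())} cells")  # masks is an integer label image
```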
{"title":"Cellpose3: one-click image restoration for improved cellular segmentation","authors":"Carsen Stringer, Marius Pachitariu","doi":"10.1038/s41592-025-02595-5","DOIUrl":"10.1038/s41592-025-02595-5","url":null,"abstract":"Generalist methods for cellular segmentation have good out-of-the-box performance on a variety of image types; however, existing methods struggle for images that are degraded by noise, blurring or undersampling, all of which are common in microscopy. We focused the development of Cellpose3 on addressing these cases and here we demonstrate substantial out-of-the-box gains in segmentation and image quality for noisy, blurry and undersampled images. Unlike previous approaches that train models to restore pixel values, we trained Cellpose3 to output images that are well segmented by a generalist segmentation model, while maintaining perceptual similarity to the target images. Furthermore, we trained the restoration models on a large, varied collection of datasets, thus ensuring good generalization to user images. We provide these tools as ‘one-click’ buttons inside the graphical interface of Cellpose as well as in the Cellpose API. Cellpose3 employs deep-learning-based approaches for image restoration to improve cellular segmentation and shows strong generalized performance even on images degraded by noise, blurring or undersampling.","PeriodicalId":18981,"journal":{"name":"Nature Methods","volume":"22 3","pages":"592-599"},"PeriodicalIF":36.1,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.nature.com/articles/s41592-025-02595-5.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143409299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scalable co-sequencing of RNA and DNA from individual nuclei
Pub Date: 2025-02-12 | DOI: 10.1038/s41592-024-02579-x | Nature Methods 22(3): 477–487
Timothy R. Olsen, Pranay Talla, Romella K. Sagatelian, Julia Furnari, Jeffrey N. Bruce, Peter Canoll, Shan Zha, Peter A. Sims
The ideal technology for directly investigating the relationship between genotype and phenotype would analyze both RNA and DNA genome-wide and with single-cell resolution; however, existing tools lack the throughput required for comprehensive analysis of complex tumors and tissues. We introduce a highly scalable method for jointly profiling DNA and expression following nucleosome depletion (DEFND-seq). In DEFND-seq, nuclei are nucleosome-depleted, tagmented and separated into individual droplets for messenger RNA and genomic DNA barcoding. Once nuclei have been depleted of nucleosomes, subsequent steps can be performed using the widely available 10x Genomics droplet microfluidic technology and commercial kits. We demonstrate the production of high-complexity mRNA and gDNA sequencing libraries from thousands of individual nuclei from cell lines, fresh and archived surgical specimens for associating gene expression with both copy number and single-nucleotide variants. Building on a nucleosome-depletion strategy, DEFND-seq utilizes a droplet microfluidic platform to enable high-throughput co-profiling of DNA and RNA in single cells.
{"title":"Scalable co-sequencing of RNA and DNA from individual nuclei","authors":"Timothy R. Olsen, Pranay Talla, Romella K. Sagatelian, Julia Furnari, Jeffrey N. Bruce, Peter Canoll, Shan Zha, Peter A. Sims","doi":"10.1038/s41592-024-02579-x","DOIUrl":"10.1038/s41592-024-02579-x","url":null,"abstract":"The ideal technology for directly investigating the relationship between genotype and phenotype would analyze both RNA and DNA genome-wide and with single-cell resolution; however, existing tools lack the throughput required for comprehensive analysis of complex tumors and tissues. We introduce a highly scalable method for jointly profiling DNA and expression following nucleosome depletion (DEFND-seq). In DEFND-seq, nuclei are nucleosome-depleted, tagmented and separated into individual droplets for messenger RNA and genomic DNA barcoding. Once nuclei have been depleted of nucleosomes, subsequent steps can be performed using the widely available 10x Genomics droplet microfluidic technology and commercial kits. We demonstrate the production of high-complexity mRNA and gDNA sequencing libraries from thousands of individual nuclei from cell lines, fresh and archived surgical specimens for associating gene expression with both copy number and single-nucleotide variants. Building on a nucleosome-depletion strategy, DEFND-seq utilizes a droplet microfluidic platform to enable high-throughput co-profiling of DNA and RNA in single cells.","PeriodicalId":18981,"journal":{"name":"Nature Methods","volume":"22 3","pages":"477-487"},"PeriodicalIF":36.1,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143409306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reporting methods for reusability
Pub Date: 2025-02-11 | DOI: 10.1038/s41592-025-02615-4 | Nature Methods 22(2): 217 | Open access PDF: https://www.nature.com/articles/s41592-025-02615-4.pdf
Proper methods reporting is crucial for transparency, but ensuring method reusability by other labs takes a bit of extra effort. Here we discuss best practices for reporting methods so that they can be reused.
{"title":"Reporting methods for reusability","authors":"","doi":"10.1038/s41592-025-02615-4","DOIUrl":"10.1038/s41592-025-02615-4","url":null,"abstract":"Proper methods reporting is crucial for transparency, but ensuring method reusability by other labs takes a bit of extra effort. Here we discuss best practices for reporting methods so that they can be reused.","PeriodicalId":18981,"journal":{"name":"Nature Methods","volume":"22 2","pages":"217-217"},"PeriodicalIF":36.1,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.nature.com/articles/s41592-025-02615-4.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143389462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tackling in vivo screening complexity
Pub Date: 2025-02-11 | DOI: 10.1038/s41592-025-02605-6 | Nature Methods 22(2): 224
Lei Tang
{"title":"Tackling in vivo screening complexity","authors":"Lei Tang","doi":"10.1038/s41592-025-02605-6","DOIUrl":"10.1038/s41592-025-02605-6","url":null,"abstract":"","PeriodicalId":18981,"journal":{"name":"Nature Methods","volume":"22 2","pages":"224-224"},"PeriodicalIF":36.1,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143389463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A DREADD for the periphery
Pub Date: 2025-02-11 | DOI: 10.1038/s41592-025-02604-7 | Nature Methods 22(2): 224
Nina Vogt
{"title":"A DREADD for the periphery","authors":"Nina Vogt","doi":"10.1038/s41592-025-02604-7","DOIUrl":"10.1038/s41592-025-02604-7","url":null,"abstract":"","PeriodicalId":18981,"journal":{"name":"Nature Methods","volume":"22 2","pages":"224-224"},"PeriodicalIF":36.1,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143389479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DeepPrep: an accelerated, scalable and robust pipeline for neuroimaging preprocessing empowered by deep learning
Pub Date: 2025-02-06 | DOI: 10.1038/s41592-025-02599-1 | Nature Methods 22(3): 473–476 | Open access PDF: https://www.nature.com/articles/s41592-025-02599-1.pdf
Jianxun Ren, Ning An, Cong Lin, Youjia Zhang, Zhenyu Sun, Wei Zhang, Shiyi Li, Ning Guo, Weigang Cui, Qingyu Hu, Weiwei Wang, Xuehai Wu, Yinyan Wang, Tao Jiang, Theodore D. Satterthwaite, Danhong Wang, Hesheng Liu
Neuroimaging has entered the era of big data. However, the advancement of preprocessing pipelines falls behind the rapid expansion of data volume, causing substantial computational challenges. Here we present DeepPrep, a pipeline empowered by deep learning and a workflow manager. Evaluated on over 55,000 scans, DeepPrep demonstrates tenfold acceleration, scalability and robustness compared to the state-of-the-art pipeline, thereby meeting the scalability requirements of neuroimaging. DeepPrep is a preprocessing pipeline for functional and structural MRI data from humans. Deep learning-based modules and an efficient workflow allow DeepPrep to handle large datasets.
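The abstract attributes DeepPrep's scalability to deep-learning modules combined with a workflow manager that treats each scan as an independent task. The toy sketch below illustrates that parallelization idea only; `preprocess_scan` and the file layout are hypothetical stand-ins and do not reflect DeepPrep's actual interface or workflow engine.

```python
# Toy illustration of the scalability idea behind a workflow-managed pipeline:
# per-scan preprocessing is independent, so the work parallelizes across workers.
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def preprocess_scan(scan_path: Path) -> str:
    # Placeholder for the per-scan steps (e.g., surface reconstruction,
    # registration, normalization) that DeepPrep accelerates with deep learning.
    return f"done: {scan_path.name}"

def run_all(bids_dir: str, max_workers: int = 8) -> list[str]:
    """Preprocess every T1-weighted scan found under a BIDS-style directory in parallel."""
    scans = sorted(Path(bids_dir).glob("sub-*/**/*_T1w.nii.gz"))
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(preprocess_scan, scans))
```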
{"title":"DeepPrep: an accelerated, scalable and robust pipeline for neuroimaging preprocessing empowered by deep learning","authors":"Jianxun Ren, Ning An, Cong Lin, Youjia Zhang, Zhenyu Sun, Wei Zhang, Shiyi Li, Ning Guo, Weigang Cui, Qingyu Hu, Weiwei Wang, Xuehai Wu, Yinyan Wang, Tao Jiang, Theodore D. Satterthwaite, Danhong Wang, Hesheng Liu","doi":"10.1038/s41592-025-02599-1","DOIUrl":"10.1038/s41592-025-02599-1","url":null,"abstract":"Neuroimaging has entered the era of big data. However, the advancement of preprocessing pipelines falls behind the rapid expansion of data volume, causing substantial computational challenges. Here we present DeepPrep, a pipeline empowered by deep learning and a workflow manager. Evaluated on over 55,000 scans, DeepPrep demonstrates tenfold acceleration, scalability and robustness compared to the state-of-the-art pipeline, thereby meeting the scalability requirements of neuroimaging. DeepPrep is a preprocessing pipeline for functional and structural MRI data from humans. Deep learning-based modules and an efficient workflow allow DeepPrep to handle large datasets.","PeriodicalId":18981,"journal":{"name":"Nature Methods","volume":"22 3","pages":"473-476"},"PeriodicalIF":36.1,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.nature.com/articles/s41592-025-02599-1.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143365263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}