Pub Date: 2024-12-20; eCollection Date: 2024-01-01; DOI: 10.1017/S2633903X24000151
Yu Zhou, Jan Sollmann, Jianxu Chen
With the fast development of modern microscopes and bioimaging techniques, an unprecedentedly large amount of imaging data is being generated, stored, analyzed, and shared through networks. The size of these data poses great challenges for current data infrastructure. One common way to reduce the data size is image compression. This study analyzes multiple classic and deep-learning-based image compression methods and presents an empirical study of their impact on downstream deep-learning-based image processing models. We used deep-learning-based label-free prediction models (i.e., predicting fluorescent images from bright-field images) as an example downstream task for the comparison and analysis of the impact of image compression. Different compression techniques are compared in terms of compression ratio, image similarity, and, most importantly, the prediction accuracy of label-free models on original and compressed images. We found that artificial intelligence (AI)-based compression techniques largely outperform the classic ones with minimal influence on downstream 2D label-free tasks. We hope this study sheds light on the potential of deep-learning-based image compression and raises awareness of the potential impact of image compression on downstream deep-learning models for analysis.
{"title":"Deep-learning-based image compression for microscopy images: An empirical study.","authors":"Yu Zhou, Jan Sollmann, Jianxu Chen","doi":"10.1017/S2633903X24000151","DOIUrl":"https://doi.org/10.1017/S2633903X24000151","url":null,"abstract":"<p><p>With the fast development of modern microscopes and bioimaging techniques, an unprecedentedly large amount of imaging data is being generated, stored, analyzed, and shared through networks. The size of the data poses great challenges for current data infrastructure. One common way to reduce the data size is by image compression. This study analyzes multiple classic and deep-learning-based image compression methods, as well as an empirical study on their impact on downstream deep-learning-based image processing models. We used deep-learning-based label-free prediction models (i.e., predicting fluorescent images from bright-field images) as an example downstream task for the comparison and analysis of the impact of image compression. Different compression techniques are compared in compression ratio, image similarity, and, most importantly, the prediction accuracy of label-free models on original and compressed images. We found that artificial intelligence (AI)-based compression techniques largely outperform the classic ones with minimal influence on the downstream 2D label-free tasks. In the end, we hope this study could shed light on the potential of deep-learning-based image compression and raise the awareness of the potential impacts of image compression on downstream deep-learning models for analysis.</p>","PeriodicalId":72371,"journal":{"name":"Biological imaging","volume":"4 ","pages":"e16"},"PeriodicalIF":0.0,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11704128/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142959811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-16; DOI: 10.1017/S2633903X24000163
Xin Tian, Nantheera Anantrasirichai, Lindsay Nicholson, Alin Achim
Optical coherence tomography (OCT) and confocal microscopy are pivotal in retinal imaging, offering distinct advantages and limitations. In vivo OCT offers rapid, noninvasive imaging but can suffer from clarity issues and motion artifacts, while ex vivo confocal microscopy, which provides high-resolution color images with cellular detail, is invasive and raises ethical concerns. To bridge the benefits of both modalities, we propose a novel framework based on an unsupervised 3D CycleGAN for translating unpaired in vivo OCT to ex vivo confocal microscopy images. This marks the first attempt to exploit the inherent 3D information of OCT and translate it into the rich, detailed color domain of confocal microscopy. We also introduce a unique dataset, OCT2Confocal, comprising mouse OCT and confocal retinal images, facilitating the development of, and establishing a benchmark for, cross-modal image translation research. Our model has been evaluated both quantitatively and qualitatively, achieving Fréchet inception distance (FID) scores of 0.766 and kernel inception distance (KID) scores as low as 0.153, as well as leading subjective mean opinion scores (MOS). Our model demonstrated superior image fidelity and quality compared with existing methods, even with limited data. Our approach effectively synthesizes color information from 3D confocal images, closely approximating target outcomes and suggesting enhanced potential for diagnostic and monitoring applications in ophthalmology.
The quest for early detection of retinal disease: 3D CycleGAN-based translation of optical coherence tomography into confocal microscopy. Biological Imaging, 4, e15.
Pub Date: 2024-11-22; eCollection Date: 2024-01-01; DOI: 10.1017/S2633903X24000114
Caterina Fuster-Barceló, Carlos García-López-de-Haro, Estibaliz Gómez-de-Mariscal, Wei Ouyang, Jean-Christophe Olivo-Marin, Daniel Sage, Arrate Muñoz-Barrutia
This manuscript showcases the latest advancements in deepImageJ, a pivotal Fiji/ImageJ plugin for bioimage analysis in the life sciences. The plugin, known for its user-friendly interface, facilitates the application of diverse pre-trained convolutional neural networks to custom data. The manuscript demonstrates several deepImageJ capabilities, particularly in deploying complex pipelines, three-dimensional (3D) image analysis, and processing large images. A key development is the integration of the Java Deep Learning Library, expanding deepImageJ's compatibility with various deep learning (DL) frameworks, including TensorFlow, PyTorch, and ONNX. This allows multiple engines to run within a single Fiji/ImageJ instance, streamlining complex bioimage analysis workflows. The manuscript details three case studies to demonstrate these capabilities. The first case study explores integrated image-to-image translation followed by nuclei segmentation. The second focuses on 3D nuclei segmentation. The third showcases large image volume segmentation and compatibility with the BioImage Model Zoo. These use cases underscore deepImageJ's versatility and power in making advanced DL more accessible and efficient for bioimage analysis. The new developments within deepImageJ seek to provide a more flexible and enriched user-friendly framework to enable next-generation image processing in the life sciences.
{"title":"Bridging the gap: Integrating cutting-edge techniques into biological imaging with deepImageJ.","authors":"Caterina Fuster-Barceló, Carlos García-López-de-Haro, Estibaliz Gómez-de-Mariscal, Wei Ouyang, Jean-Christophe Olivo-Marin, Daniel Sage, Arrate Muñoz-Barrutia","doi":"10.1017/S2633903X24000114","DOIUrl":"https://doi.org/10.1017/S2633903X24000114","url":null,"abstract":"<p><p>This manuscript showcases the latest advancements in deepImageJ, a pivotal Fiji/ImageJ plugin for bioimage analysis in life sciences. The plugin, known for its user-friendly interface, facilitates the application of diverse pre-trained convolutional neural networks to custom data. The manuscript demonstrates several deepImageJ capabilities, particularly in deploying complex pipelines, three-dimensional (3D) image analysis, and processing large images. A key development is the integration of the Java Deep Learning Library, expanding deepImageJ's compatibility with various deep learning (DL) frameworks, including TensorFlow, PyTorch, and ONNX. This allows for running multiple engines within a single Fiji/ImageJ instance, streamlining complex bioimage analysis workflows. The manuscript details three case studies to demonstrate these capabilities. The first case study explores integrated image-to-image translation followed by nuclei segmentation. The second case study focuses on 3D nuclei segmentation. The third case study showcases large image volume segmentation and compatibility with the BioImage Model Zoo. These use cases underscore deepImageJ's versatility and power to make advanced DLmore accessible and efficient for bioimage analysis. The new developments within deepImageJ seek to provide a more flexible and enriched user-friendly framework to enable next-generation image processing in life science.</p>","PeriodicalId":72371,"journal":{"name":"Biological imaging","volume":"4 ","pages":"e14"},"PeriodicalIF":0.0,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11704127/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142959801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-15; eCollection Date: 2024-01-01; DOI: 10.1017/S2633903X24000096
Valentin Debarnot, Pierre Weiss
We propose a neural network architecture and a training procedure to estimate blurring operators and deblur images from a single degraded image. Our key assumption is that the forward operators can be parameterized by a low-dimensional vector. The models we consider include a description of the point spread function with Zernike polynomials in the pupil plane or product-convolution expansions, which incorporate space-varying operators. Numerical experiments show that the proposed method can accurately and robustly recover the blur parameters even for large noise levels. For a convolution model, the average signal-to-noise ratio of the recovered point spread function ranges from 13 dB in the noiseless regime to 8 dB in the high-noise regime; in comparison, the tested alternatives yield negative values. This operator estimate can then be used as input to an unrolled neural network to deblur the image. Quantitative experiments on synthetic data demonstrate that this method outperforms other commonly used methods both perceptually and in terms of SSIM. The algorithm can process a 512 × 512 image in under a second on a consumer graphics card and does not require any human interaction once the operator parameterization has been set up.
{"title":"Deep-blur: Blind identification and deblurring with convolutional neural networks.","authors":"Valentin Debarnot, Pierre Weiss","doi":"10.1017/S2633903X24000096","DOIUrl":"https://doi.org/10.1017/S2633903X24000096","url":null,"abstract":"<p><p>We propose a neural network architecture and a training procedure to estimate blurring operators and deblur images from a single degraded image. Our key assumption is that the forward operators can be parameterized by a low-dimensional vector. The models we consider include a description of the point spread function with Zernike polynomials in the pupil plane or product-convolution expansions, which incorporate space-varying operators. Numerical experiments show that the proposed method can accurately and robustly recover the blur parameters even for large noise levels. For a convolution model, the average signal-to-noise ratio of the recovered point spread function ranges from 13 dB in the noiseless regime to 8 dB in the high-noise regime. In comparison, the tested alternatives yield negative values. This operator estimate can then be used as an input for an unrolled neural network to deblur the image. Quantitative experiments on synthetic data demonstrate that this method outperforms other commonly used methods both perceptually and in terms of SSIM. The algorithm can process a 512 512 image under a second on a consumer graphics card and does not require any human interaction once the operator parameterization has been set up.<sup>1</sup>.</p>","PeriodicalId":72371,"journal":{"name":"Biological imaging","volume":"4 ","pages":"e13"},"PeriodicalIF":0.0,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11704139/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142959804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-14; DOI: 10.1017/S2633903X2400014X
Ihab Bendidi, Adrien Bardes, Ethan Cohen, Alexis Lamiable, Guillaume Bollot, Auguste Genovesio
Self-supervised representation learning (SSRL) in computer vision relies heavily on simple image transformations, such as random rotations, crops, or illumination changes, to learn meaningful and invariant features. Despite their acknowledged importance, there is a lack of comprehensive exploration of the impact of transformation choice in the literature. Our study delves into this relationship, specifically focusing on microscopy imaging with subtle cell phenotype differences. We reveal that transformation design acts as a form of either unwanted or beneficial supervision, impacting feature clustering and representation relevance. Importantly, these effects vary based on class labels in a supervised dataset. In microscopy images, transformation design significantly influences the representation, introducing imperceptible yet strong biases. We demonstrate that strategic transformation selection, based on desired feature invariance, drastically improves classification performance and representation quality, even with limited training samples.
Exploring self-supervised learning biases for microscopy image representation. Biological Imaging, 4, e12.
Pub Date: 2024-11-11; eCollection Date: 2024-01-01; DOI: 10.1017/S2633903X24000102
G Nardi, L Torcq, A A Schmidt, J-C Olivo-Marin
We develop a novel method for segmentation of 3D confocal microscopy images of emerging hematopoietic stem cells. The method is based on the theory of persistent homology and uses an optimal threshold to select the most persistent cycles in the persistence diagram. This enables the segmentation of the image's most contrasted and representative shapes. Coupling this segmentation method with a meshing algorithm, we define a pipeline for 3D reconstruction of confocal volumes. Compared to related methods, this approach improves shape segmentation, is easier to automate, and has fewer parameters. We apply it to the segmentation of membranes, at subcellular resolution, of cells involved in the endothelial-to-hematopoietic transition (EHT) in zebrafish embryos.
{"title":"Topology-based segmentation of 3D confocal images of emerging hematopoietic stem cells in the zebrafish embryo.","authors":"G Nardi, L Torcq, A A Schmidt, J-C Olivo-Marin","doi":"10.1017/S2633903X24000102","DOIUrl":"https://doi.org/10.1017/S2633903X24000102","url":null,"abstract":"<p><p>We develop a novel method for image segmentation of 3D confocal microscopy images of emerging hematopoietic stem cells. The method is based on the theory of persistent homology and uses an optimal threshold to select the most persistent cycles in the persistence diagram. This enables the segmentation of the image's most contrasted and representative shapes. Coupling this segmentation method with a meshing algorithm, we define a pipeline for 3D reconstruction of confocal volumes. Compared to related methods, this approach improves shape segmentation, is more ergonomic to automatize, and has fewer parameters. We apply it to the segmentation of membranes, at subcellular resolution, of cells involved in the endothelial-to-hematopoietic transition (EHT) in the zebrafish embryos.</p>","PeriodicalId":72371,"journal":{"name":"Biological imaging","volume":"4 ","pages":"e11"},"PeriodicalIF":0.0,"publicationDate":"2024-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11704129/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142959228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-23; eCollection Date: 2024-01-01; DOI: 10.1017/S2633903X24000138
Maddalena M Bolognesi, Lorenzo Dall'Olio, Amy Maerten, Simone Borghesi, Gastone Castellani, Giorgio Cattoretti
Hyperplexed in situ targeted proteomics via antibody immunodetection (i.e., >15 markers) is changing how we classify cells and tissues. Unlike in other high-dimensional single-cell assays (flow cytometry, single-cell RNA sequencing), the human eye is a necessary component in multiple procedural steps: image segmentation, signal thresholding, antibody validation, and iconographic rendering. Established methods complement human image evaluation but may carry undisclosed biases in such a new context; we therefore re-evaluate all the steps in hyperplexed proteomics. We found that the human eye can discriminate fewer than 64 of 256 gray levels and has limitations in discriminating luminance levels in conventional histology images. Furthermore, only images containing visible signals are selected, and eye-guided digital thresholding separates signal from noise. BRAQUE, a hyperplexed proteomic tool, can extract, in a marker-agnostic fashion, granular information from markers that have a very low signal-to-noise ratio and therefore are not visualized by traditional visual rendering. By analyzing a public human lymph node dataset, we also found unpredicted staining results from validated antibodies, which highlight the need to upgrade the definition of antibody specificity in hyperplexed immunostaining. Spatially hyperplexed methods upgrade and supplant traditional image-based analysis of tissue immunostaining, beyond the human eye's contribution.
{"title":"Seeing or believing in hyperplexed spatial proteomics via antibodies: New and old biases for an image-based technology.","authors":"Maddalena M Bolognesi, Lorenzo Dall'Olio, Amy Maerten, Simone Borghesi, Gastone Castellani, Giorgio Cattoretti","doi":"10.1017/S2633903X24000138","DOIUrl":"10.1017/S2633903X24000138","url":null,"abstract":"<p><p>Hyperplexed in-situ targeted proteomics via antibody immunodetection (i.e., >15 markers) is changing how we classify cells and tissues. Differently from other high-dimensional single-cell assays (flow cytometry, single-cell RNA sequencing), the human eye is a necessary component in multiple procedural steps: image segmentation, signal thresholding, antibody validation, and iconographic rendering. Established methods complement the human image evaluation, but may carry undisclosed biases in such a new context, therefore we re-evaluate all the steps in hyperplexed proteomics. We found that the human eye can discriminate less than 64 out of 256 gray levels and has limitations in discriminating luminance levels in conventional histology images. Furthermore, only images containing visible signals are selected and eye-guided digital thresholding separates signal from noise. BRAQUE, a hyperplexed proteomic tool, can extract, in a marker-agnostic fashion, granular information from markers which have a very low signal-to-noise ratio and therefore are not visualized by traditional visual rendering. By analyzing a public human lymph node dataset, we also found unpredicted staining results by validated antibodies, which highlight the need to upgrade the definition of antibody specificity in hyperplexed immunostaining. Spatially hyperplexed methods upgrade and supplant traditional image-based analysis of tissue immunostaining, beyond the human eye contribution.</p>","PeriodicalId":72371,"journal":{"name":"Biological imaging","volume":"4 ","pages":"e10"},"PeriodicalIF":0.0,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11503829/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142513910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-30; eCollection Date: 2024-01-01; DOI: 10.1017/S2633903X24000084
Marzieh Gheisari, Auguste Genovesio
Supervised deep learning approaches can artificially increase the resolution of microscopy images by learning a mapping between two image resolutions or modalities. However, such methods often require a large set of hard-to-obtain low-resolution/high-resolution image pairs and produce synthetic images with only a moderate increase in resolution. Conversely, recent methods based on generative adversarial network (GAN) latent search offer a drastic increase in resolution without the need for paired images. However, they provide only limited reconstruction of the interpretable features of the high-resolution (HR) image. Here, we propose a robust super-resolution (SR) method based on regularized latent search (RLS) that offers an actionable balance between fidelity to the ground truth (GT) and realism of the recovered image given a distribution prior. The latter allows the analysis of a low-resolution (LR) image to be split into a computational SR task performed by deep learning, followed by a quantification task performed by a handcrafted algorithm based on interpretable biological features. This two-step process holds potential for various applications, such as diagnostics on mobile devices, where the main aim is not to recover the HR details of a specific sample but rather to obtain HR images that preserve explainable and quantifiable differences between conditions.
{"title":"Reconstructing interpretable features in computational super-resolution microscopy via regularized latent search.","authors":"Marzieh Gheisari, Auguste Genovesio","doi":"10.1017/S2633903X24000084","DOIUrl":"10.1017/S2633903X24000084","url":null,"abstract":"<p><p>Supervised deep learning approaches can artificially increase the resolution of microscopy images by learning a mapping between two image resolutions or modalities. However, such methods often require a large set of hard-to-get low-res/high-res image pairs and produce synthetic images with a moderate increase in resolution. Conversely, recent methods based on generative adversarial network (GAN) latent search offered a drastic increase in resolution without the need of paired images. However, they offer limited reconstruction of the high-resolution (HR) image interpretable features. Here, we propose a robust super-resolution (SR) method based on regularized latent search (RLS) that offers an actionable balance between fidelity to the ground truth (GT) and realism of the recovered image given a distribution prior. The latter allows to split the analysis of a low-resolution (LR) image into a computational SR task performed by deep learning followed by a quantification task performed by a handcrafted algorithm based on interpretable biological features. This two-step process holds potential for various applications such as diagnostics on mobile devices, where the main aim is not to recover the HR details of a specific sample but rather to obtain HR images that preserve explainable and quantifiable differences between conditions.</p>","PeriodicalId":72371,"journal":{"name":"Biological imaging","volume":"4 ","pages":"e8"},"PeriodicalIF":0.0,"publicationDate":"2024-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11418082/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142309235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-16; DOI: 10.1017/s2633903x24000072
Tommi Muller, Adriana L. Duncan, Eric J. Verbeke, Joe Kileel
{"title":"Algebraic Constraints and Algorithms for Common Lines in Cryo-EM","authors":"Tommi Muller, Adriana L. Duncan, Eric J. Verbeke, Joe Kileel","doi":"10.1017/s2633903x24000072","DOIUrl":"https://doi.org/10.1017/s2633903x24000072","url":null,"abstract":"","PeriodicalId":72371,"journal":{"name":"Biological imaging","volume":"21 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140967427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-09; eCollection Date: 2024-01-01; DOI: 10.1017/S2633903X24000060
Hui Wang, Shiqing Liao, Xinye Yu, Jiayan Zhang, Z Hong Zhou
Cryogenic electron tomography (cryoET) is capable of determining in situ biological structures of molecular complexes at near-atomic resolution by averaging half a million subtomograms. While abundant complexes/particles are often clustered in arrays, precisely locating and seamlessly averaging such particles across many tomograms present major challenges. Here, we developed TomoNet, a software package with a modern graphical user interface to carry out the entire pipeline of cryoET and subtomogram averaging to achieve high resolution. TomoNet features built-in automatic particle picking and three-dimensional (3D) classification functions and integrates commonly used packages to streamline high-resolution subtomogram averaging for structures in 1D, 2D, or 3D arrays. Automatic particle picking is accomplished in two complementary ways: one based on template matching and the other using deep learning. TomoNet's hierarchical file organization and visual display facilitate efficient data management as required for large cryoET datasets. Applications of TomoNet to three types of datasets demonstrate its capability of efficient and accurate particle picking on flexible and imperfect lattices to obtain high-resolution 3D biological structures: virus-like particles, bacterial surface layers within cellular lamellae, and membranes decorated with nuclear egress protein complexes. These results demonstrate TomoNet's potential for broad applications to various cryoET projects targeting high-resolution in situ structures.
{"title":"TomoNet: A streamlined cryogenic electron tomography software pipeline with automatic particle picking on flexible lattices.","authors":"Hui Wang, Shiqing Liao, Xinye Yu, Jiayan Zhang, Z Hong Zhou","doi":"10.1017/S2633903X24000060","DOIUrl":"10.1017/S2633903X24000060","url":null,"abstract":"<p><p>Cryogenic electron tomography (cryoET) is capable of determining <i>in situ</i> biological structures of molecular complexes at near-atomic resolution by averaging half a million subtomograms. While abundant complexes/particles are often clustered in arrays, precisely locating and seamlessly averaging such particles across many tomograms present major challenges. Here, we developed TomoNet, a software package with a modern graphical user interface to carry out the entire pipeline of cryoET and subtomogram averaging to achieve high resolution. TomoNet features built-in automatic particle picking and three-dimensional (3D) classification functions and integrates commonly used packages to streamline high-resolution subtomogram averaging for structures in 1D, 2D, or 3D arrays. Automatic particle picking is accomplished in two complementary ways: one based on template matching and the other using deep learning. TomoNet's hierarchical file organization and visual display facilitate efficient data management as required for large cryoET datasets. Applications of TomoNet to three types of datasets demonstrate its capability of efficient and accurate particle picking on flexible and imperfect lattices to obtain high-resolution 3D biological structures: virus-like particles, bacterial surface layers within cellular lamellae, and membranes decorated with nuclear egress protein complexes. These results demonstrate TomoNet's potential for broad applications to various cryoET projects targeting high-resolution <i>in situ</i> structures.</p>","PeriodicalId":72371,"journal":{"name":"Biological imaging","volume":"4 ","pages":"e7"},"PeriodicalIF":0.0,"publicationDate":"2024-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11140495/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141201418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}