
Biological imaging: Latest Publications

Deep-learning-based image compression for microscopy images: An empirical study.
Pub Date: 2024-12-20 eCollection Date: 2024-01-01 DOI: 10.1017/S2633903X24000151
Yu Zhou, Jan Sollmann, Jianxu Chen

With the fast development of modern microscopes and bioimaging techniques, an unprecedentedly large amount of imaging data is being generated, stored, analyzed, and shared through networks. The size of the data poses great challenges for current data infrastructure. One common way to reduce the data size is image compression. This study analyzes multiple classic and deep-learning-based image compression methods and presents an empirical study of their impact on downstream deep-learning-based image processing models. We used deep-learning-based label-free prediction models (i.e., predicting fluorescent images from bright-field images) as an example downstream task for comparing and analyzing the impact of image compression. Different compression techniques are compared in terms of compression ratio, image similarity, and, most importantly, the prediction accuracy of label-free models on original and compressed images. We found that artificial intelligence (AI)-based compression techniques largely outperform the classic ones, with minimal influence on the downstream 2D label-free tasks. We hope this study sheds light on the potential of deep-learning-based image compression and raises awareness of the potential impacts of image compression on downstream deep-learning analysis models.
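A minimal sketch of the two basic metrics compared above, compression ratio and image similarity (SSIM), using a classic codec (JPEG) as the stand-in. This illustrates the measurement only, not the authors' pipeline, and assumes Pillow, NumPy, and scikit-image are available.

```python
# Illustrative sketch (not the authors' pipeline): compress a grayscale
# image with a classic codec (JPEG) and report the two basic metrics the
# study compares -- compression ratio and image similarity (SSIM).
import io

import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity

def jpeg_roundtrip(img: np.ndarray, quality: int = 85):
    """Encode a uint8 grayscale image as JPEG in memory and decode it back."""
    buf = io.BytesIO()
    Image.fromarray(img).save(buf, format="JPEG", quality=quality)
    compressed_bytes = buf.getbuffer().nbytes
    decoded = np.asarray(Image.open(io.BytesIO(buf.getvalue())))
    return decoded, compressed_bytes

# Synthetic stand-in for a bright-field microscopy image.
rng = np.random.default_rng(0)
original = rng.normal(128, 20, size=(256, 256)).clip(0, 255).astype(np.uint8)

decoded, n_bytes = jpeg_roundtrip(original, quality=85)
ratio = original.nbytes / n_bytes                                  # compression ratio
ssim = structural_similarity(original, decoded, data_range=255)    # image similarity
print(f"compression ratio: {ratio:.1f}x, SSIM: {ssim:.3f}")
```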

Citations: 0
The quest for early detection of retinal disease: 3D CycleGAN-based translation of optical coherence tomography into confocal microscopy.
Pub Date: 2024-12-16 eCollection Date: 2024-01-01 DOI: 10.1017/S2633903X24000163
Xin Tian, Nantheera Anantrasirichai, Lindsay Nicholson, Alin Achim

Optical coherence tomography (OCT) and confocal microscopy are pivotal in retinal imaging, each offering distinct advantages and limitations. In vivo OCT offers rapid, noninvasive imaging but can suffer from clarity issues and motion artifacts, while ex vivo confocal microscopy provides high-resolution color images with cellular detail but is invasive and raises ethical concerns. To combine the benefits of both modalities, we propose a novel framework based on an unsupervised 3D CycleGAN for translating unpaired in vivo OCT images to ex vivo confocal microscopy images. This marks the first attempt to exploit the inherent 3D information of OCT and translate it into the rich, detailed color domain of confocal microscopy. We also introduce a unique dataset, OCT2Confocal, comprising mouse OCT and confocal retinal images, facilitating the development of, and establishing a benchmark for, cross-modal image translation research. Our model has been evaluated both quantitatively and qualitatively, achieving Fréchet inception distance (FID) scores of 0.766, kernel inception distance (KID) scores as low as 0.153, and leading subjective mean opinion scores (MOS). It demonstrated image fidelity and quality superior to existing methods, even with limited data. Our approach effectively synthesizes color information from 3D confocal images, closely approximating target outcomes and suggesting enhanced potential for diagnostic and monitoring applications in ophthalmology.
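The cycle-consistency idea at the core of a 3D CycleGAN can be sketched as below. The single-layer "generators" and tensor sizes are toy placeholders (the real model also trains two adversarial discriminators, omitted here), and PyTorch is assumed.

```python
# Minimal sketch of the cycle-consistency term at the heart of a 3D CycleGAN
# for unpaired volume-to-volume translation; toy stand-in generators only.
import torch
import torch.nn as nn

class TinyGenerator3D(nn.Module):
    """Stand-in generator: one 3D conv that keeps spatial size."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.tanh(self.conv(x))

g_oct_to_confocal = TinyGenerator3D(1, 3)   # OCT volume -> RGB confocal volume
g_confocal_to_oct = TinyGenerator3D(3, 1)   # RGB confocal volume -> OCT volume

l1 = nn.L1Loss()
oct_vol = torch.randn(1, 1, 16, 64, 64)      # (batch, channels, depth, H, W)
confocal_vol = torch.randn(1, 3, 16, 64, 64)

# Unpaired training: each volume must survive the round trip A -> B -> A.
cycle_loss = (
    l1(g_confocal_to_oct(g_oct_to_confocal(oct_vol)), oct_vol)
    + l1(g_oct_to_confocal(g_confocal_to_oct(confocal_vol)), confocal_vol)
)
cycle_loss.backward()   # gradients flow into both generators
print(float(cycle_loss))
```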

Citations: 0
Bridging the gap: Integrating cutting-edge techniques into biological imaging with deepImageJ.
Pub Date: 2024-11-22 eCollection Date: 2024-01-01 DOI: 10.1017/S2633903X24000114
Caterina Fuster-Barceló, Carlos García-López-de-Haro, Estibaliz Gómez-de-Mariscal, Wei Ouyang, Jean-Christophe Olivo-Marin, Daniel Sage, Arrate Muñoz-Barrutia

This manuscript showcases the latest advancements in deepImageJ, a pivotal Fiji/ImageJ plugin for bioimage analysis in the life sciences. The plugin, known for its user-friendly interface, facilitates the application of diverse pre-trained convolutional neural networks to custom data. The manuscript demonstrates several deepImageJ capabilities, particularly in deploying complex pipelines, three-dimensional (3D) image analysis, and processing large images. A key development is the integration of the Java Deep Learning Library, expanding deepImageJ's compatibility with various deep learning (DL) frameworks, including TensorFlow, PyTorch, and ONNX. This allows multiple engines to run within a single Fiji/ImageJ instance, streamlining complex bioimage analysis workflows. The manuscript details three case studies to demonstrate these capabilities. The first explores integrated image-to-image translation followed by nuclei segmentation. The second focuses on 3D nuclei segmentation. The third showcases large-image-volume segmentation and compatibility with the BioImage Model Zoo. These use cases underscore deepImageJ's versatility and power to make advanced DL more accessible and efficient for bioimage analysis. The new developments within deepImageJ seek to provide a more flexible, enriched, and user-friendly framework to enable next-generation image processing in the life sciences.
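deepImageJ itself is a Java-based Fiji/ImageJ plugin; the sketch below illustrates, in Python with a stand-in model, the kind of operation it streamlines: applying a pre-trained network to a custom image, tiling large inputs so they fit in memory. The function and parameter names are illustrative and are not deepImageJ's API.

```python
# Hedged illustration (not deepImageJ's Java API): run a pre-trained model
# over a large 2D image tile by tile. `model` is a toy stand-in network.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.Sigmoid())  # stand-in

@torch.no_grad()
def predict_tiled(image: torch.Tensor, tile: int = 256) -> torch.Tensor:
    """Tile the image without overlap for brevity; production tools blend
    overlapping tiles to hide seam artifacts at tile borders."""
    h, w = image.shape
    out = torch.zeros_like(image)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = image[y:y + tile, x:x + tile]
            pred = model(patch[None, None])[0, 0]   # add/strip batch+channel dims
            out[y:y + tile, x:x + tile] = pred
    return out

large_image = torch.rand(1024, 1024)   # stand-in for a large microscopy image
mask = predict_tiled(large_image)
print(mask.shape)
```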

Citations: 0
Deep-blur: Blind identification and deblurring with convolutional neural networks.
Pub Date: 2024-11-15 eCollection Date: 2024-01-01 DOI: 10.1017/S2633903X24000096
Valentin Debarnot, Pierre Weiss

We propose a neural network architecture and a training procedure to estimate blurring operators and deblur images from a single degraded image. Our key assumption is that the forward operators can be parameterized by a low-dimensional vector. The models we consider include a description of the point spread function with Zernike polynomials in the pupil plane, or product-convolution expansions, which incorporate space-varying operators. Numerical experiments show that the proposed method can accurately and robustly recover the blur parameters even at large noise levels. For a convolution model, the average signal-to-noise ratio of the recovered point spread function ranges from 13 dB in the noiseless regime to 8 dB in the high-noise regime; in comparison, the tested alternatives yield negative values. The operator estimate can then be used as input to an unrolled neural network that deblurs the image. Quantitative experiments on synthetic data demonstrate that this method outperforms other commonly used methods both perceptually and in terms of SSIM. The algorithm can process a 512 × 512 image in under a second on a consumer graphics card and requires no human interaction once the operator parameterization has been set up.
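The low-dimensional operator parameterization described above can be illustrated as follows: the pupil phase is a weighted sum of Zernike polynomials, and the PSF is the squared magnitude of the pupil's Fourier transform. This NumPy sketch includes only the defocus term and is an illustration of the parameterization, not the authors' model.

```python
# Sketch of a low-dimensional PSF parameterization: one Zernike coefficient
# (defocus) controls the pupil phase, and the PSF follows by Fourier optics.
import numpy as np

n = 128
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
r2 = x**2 + y**2
aperture = (r2 <= 1.0).astype(float)           # unit-disk pupil support

def psf_from_zernike(defocus_coeff: float) -> np.ndarray:
    z_defocus = np.sqrt(3) * (2 * r2 - 1)       # Zernike defocus polynomial
    phase = defocus_coeff * z_defocus           # low-dim parameter -> phase map
    pupil = aperture * np.exp(1j * phase)
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()                      # normalize to unit energy

blur_kernel = psf_from_zernike(defocus_coeff=2.0)
print(blur_kernel.shape, blur_kernel.max())
```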

Citations: 0
Exploring self-supervised learning biases for microscopy image representation.
Pub Date: 2024-11-14 eCollection Date: 2024-01-01 DOI: 10.1017/S2633903X2400014X
Ihab Bendidi, Adrien Bardes, Ethan Cohen, Alexis Lamiable, Guillaume Bollot, Auguste Genovesio

Self-supervised representation learning (SSRL) in computer vision relies heavily on simple image transformations, such as random rotations, crops, or illumination changes, to learn meaningful and invariant features. Despite their acknowledged importance, the literature lacks a comprehensive exploration of the impact of transformation choice. Our study delves into this relationship, specifically focusing on microscopy imaging with subtle cell phenotype differences. We reveal that transformation design acts as a form of either unwanted or beneficial supervision, impacting feature clustering and representation relevance. Importantly, these effects vary with the class labels of a supervised dataset. In microscopy images, transformation design significantly influences the representation, introducing imperceptible yet strong biases. We demonstrate that strategic transformation selection, based on the desired feature invariances, drastically improves classification performance and representation quality, even with limited training samples.
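A sketch of the transformation design under study: a two-view augmentation pipeline of the kind contrastive SSRL methods use, where the chosen transformations (rotation, crop, illumination) define the invariances being taught. torchvision is assumed, and the specific parameters are illustrative.

```python
# Illustrative two-view augmentation pipeline: the transformations you pick
# decide which image differences the learned representation ignores.
import torch
from torchvision import transforms

ssrl_transform = transforms.Compose([
    transforms.RandomRotation(degrees=90),                      # orientation invariance
    transforms.RandomResizedCrop(size=224, scale=(0.5, 1.0)),   # scale/position
    transforms.ColorJitter(brightness=0.4, contrast=0.4),       # illumination
])

def two_views(image: torch.Tensor):
    """Two independently augmented views of one image, as contrastive SSRL
    requires; the pair implicitly defines the invariance being taught."""
    return ssrl_transform(image), ssrl_transform(image)

cell_image = torch.rand(3, 256, 256)   # stand-in for a microscopy crop
v1, v2 = two_views(cell_image)
print(v1.shape, v2.shape)
```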

Citations: 0
Topology-based segmentation of 3D confocal images of emerging hematopoietic stem cells in the zebrafish embryo.
Pub Date: 2024-11-11 eCollection Date: 2024-01-01 DOI: 10.1017/S2633903X24000102
G Nardi, L Torcq, A A Schmidt, J-C Olivo-Marin

We develop a novel method for the segmentation of 3D confocal microscopy images of emerging hematopoietic stem cells. The method is based on the theory of persistent homology and uses an optimal threshold to select the most persistent cycles in the persistence diagram. This enables the segmentation of the image's most contrasted and representative shapes. Coupling this segmentation method with a meshing algorithm, we define a pipeline for the 3D reconstruction of confocal volumes. Compared to related methods, this approach improves shape segmentation, is easier to automate, and has fewer parameters. We apply it to the segmentation of membranes, at subcellular resolution, of cells undergoing the endothelial-to-hematopoietic transition (EHT) in zebrafish embryos.
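The persistence-based selection can be conveyed with a self-contained toy: 0-dimensional persistence of a 1D intensity profile computed with union-find, keeping only peaks whose persistence clears a threshold. The paper applies the same principle to cycles in 3D confocal volumes; this sketch only illustrates the thresholding idea.

```python
# Toy 0-dimensional persistence: sweep a 1D profile from brightest down,
# track connected components of the superlevel set with union-find, and
# report peaks whose persistence (birth - death) clears a threshold.
# (The global maximum never dies, so it is not reported.)
import numpy as np

def persistent_peaks(signal: np.ndarray, min_persistence: float):
    order = np.argsort(-signal)                  # visit samples from brightest down
    parent = -np.ones(len(signal), dtype=int)    # -1 = not yet in the superlevel set
    birth = {}                                   # component root -> birth intensity
    peaks = []

    def find(i):
        while parent[i] != i:                    # path-halving union-find
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in order:
        parent[i] = i
        birth[i] = signal[i]
        for j in (i - 1, i + 1):                 # merge with already-active neighbors
            if 0 <= j < len(signal) and parent[j] != -1:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                # elder rule: the younger component dies at the current level
                young, old = (ri, rj) if birth[ri] < birth[rj] else (rj, ri)
                if birth[young] - signal[i] >= min_persistence:
                    peaks.append((young, birth[young] - signal[i]))
                parent[young] = old
    return peaks

profile = np.array([0, 3, 1, 6, 2, 7, 0, 5, 1], dtype=float)
print(persistent_peaks(profile, min_persistence=2.0))  # [(peak index, persistence)]
```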

Citations: 0
Seeing or believing in hyperplexed spatial proteomics via antibodies: New and old biases for an image-based technology.
Pub Date: 2024-10-23 eCollection Date: 2024-01-01 DOI: 10.1017/S2633903X24000138
Maddalena M Bolognesi, Lorenzo Dall'Olio, Amy Maerten, Simone Borghesi, Gastone Castellani, Giorgio Cattoretti

Hyperplexed in-situ targeted proteomics via antibody immunodetection (i.e., >15 markers) is changing how we classify cells and tissues. Unlike other high-dimensional single-cell assays (flow cytometry, single-cell RNA sequencing), the human eye is a necessary component in multiple procedural steps: image segmentation, signal thresholding, antibody validation, and iconographic rendering. Established methods complement human image evaluation but may carry undisclosed biases in such a new context; we therefore re-evaluate all the steps in hyperplexed proteomics. We found that the human eye can discriminate fewer than 64 of 256 gray levels and has limitations in discriminating luminance levels in conventional histology images. Furthermore, only images containing visible signals are selected, and eye-guided digital thresholding separates signal from noise. BRAQUE, a hyperplexed proteomic tool, can extract, in a marker-agnostic fashion, granular information from markers with a very low signal-to-noise ratio, which traditional visual rendering therefore fails to display. By analyzing a public human lymph node dataset, we also found unexpected staining results from validated antibodies, highlighting the need to update the definition of antibody specificity in hyperplexed immunostaining. Spatially hyperplexed methods upgrade and supplant traditional image-based analysis of tissue immunostaining, beyond the contribution of the human eye.
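One step the study scrutinizes is eye-guided signal thresholding; as a point of reference, a reproducible automatic baseline such as Otsu's method looks like the scikit-image sketch below. This is an illustration of automated thresholding, not the BRAQUE pipeline.

```python
# Reproducible alternative to eye-guided thresholding: Otsu's method picks
# the threshold that best separates the intensity histogram into two classes.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(1)
marker = rng.normal(10, 2, size=(128, 128))   # low signal-to-noise marker image
marker[40:60, 40:60] += 8                     # faint stained region

t = threshold_otsu(marker)
mask = marker > t                             # signal/noise separation
print(f"Otsu threshold: {t:.2f}, foreground fraction: {mask.mean():.3f}")
```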

通过抗体免疫检测(即大于 15 个标记物)进行的超复合原位靶向蛋白质组学正在改变我们对细胞和组织进行分类的方式。与其他高维单细胞检测(流式细胞仪、单细胞 RNA 测序)不同,人眼是图像分割、信号阈值、抗体验证和图标渲染等多个程序步骤的必要组成部分。已有的方法是对人类图像评估的补充,但在这种新情况下可能会出现未披露的偏差,因此我们重新评估了超复杂蛋白质组学的所有步骤。我们发现,人眼只能分辨 256 个灰度级中的不到 64 个,而且在分辨传统组织学图像的亮度级方面也有局限性。此外,只有包含可见信号的图像才会被选中,而人眼引导的数字阈值能将信号与噪声分离开来。BRAQUE 是一种超复合物蛋白质组学工具,能以标记识别的方式从信噪比极低的标记物中提取颗粒信息,因此传统的可视化渲染方法无法将其可视化。通过分析公开的人类淋巴结数据集,我们还发现了有效抗体无法预测的染色结果,这凸显了在超复合物免疫染色中提升抗体特异性定义的必要性。空间超复合物方法提升并取代了传统的基于图像的组织免疫染色分析,超越了人眼的贡献。
Citations: 0
Reconstructing interpretable features in computational super-resolution microscopy via regularized latent search.
Pub Date: 2024-05-30 eCollection Date: 2024-01-01 DOI: 10.1017/S2633903X24000084
Marzieh Gheisari, Auguste Genovesio

Supervised deep learning approaches can artificially increase the resolution of microscopy images by learning a mapping between two image resolutions or modalities. However, such methods often require a large set of hard-to-get low-resolution/high-resolution image pairs and produce synthetic images with only a moderate increase in resolution. Conversely, recent methods based on generative adversarial network (GAN) latent search offer a drastic increase in resolution without the need for paired images, but they provide limited reconstruction of interpretable high-resolution (HR) image features. Here, we propose a robust super-resolution (SR) method based on regularized latent search (RLS) that offers an actionable balance between fidelity to the ground truth (GT) and realism of the recovered image, given a distribution prior. The latter allows the analysis of a low-resolution (LR) image to be split into a computational SR task performed by deep learning, followed by a quantification task performed by a handcrafted algorithm based on interpretable biological features. This two-step process holds potential for applications such as diagnostics on mobile devices, where the main aim is not to recover the HR details of a specific sample but rather to obtain HR images that preserve explainable and quantifiable differences between conditions.
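The core of latent-search SR can be sketched as an optimization over the latent code: minimize the data fidelity between the downsampled generator output and the LR observation, plus a regularizer keeping the code plausible under the Gaussian prior. The toy generator and the weight `lam` below are placeholders, not the authors' RLS model.

```python
# Sketch of regularized latent search for super-resolution: optimize z so
# that G(z), once downsampled, matches the LR observation, while a prior
# term keeps z plausible. The generator is a frozen toy stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

generator = nn.Sequential(                  # stand-in for a pre-trained GAN
    nn.Linear(64, 16 * 16), nn.Tanh(), nn.Unflatten(1, (1, 16, 16)),
    nn.Upsample(scale_factor=4), nn.Conv2d(1, 1, 3, padding=1),
)
for p in generator.parameters():            # the generator stays fixed;
    p.requires_grad_(False)                 # only the latent code is searched

lr_obs = torch.rand(1, 1, 16, 16)           # low-resolution observation
z = torch.zeros(1, 64, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
lam = 1e-3                                  # regularization weight (illustrative)

for step in range(200):
    opt.zero_grad()
    hr = generator(z)                                    # candidate HR image
    fidelity = F.mse_loss(F.avg_pool2d(hr, 4), lr_obs)   # match after downsampling
    reg = lam * z.pow(2).sum()                           # stay near the latent prior
    (fidelity + reg).backward()
    opt.step()

print(generator(z).shape)   # recovered HR estimate, here (1, 1, 64, 64)
```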

Citations: 0
Algebraic Constraints and Algorithms for Common Lines in Cryo-EM
Pub Date: 2024-05-16 DOI: 10.1017/s2633903x24000072
Tommi Muller, Adriana L. Duncan, Eric J. Verbeke, Joe Kileel
Citations: 0
TomoNet: A streamlined cryogenic electron tomography software pipeline with automatic particle picking on flexible lattices.
Pub Date: 2024-05-09 eCollection Date: 2024-01-01 DOI: 10.1017/S2633903X24000060
Hui Wang, Shiqing Liao, Xinye Yu, Jiayan Zhang, Z Hong Zhou

Cryogenic electron tomography (cryoET) can determine in situ biological structures of molecular complexes at near-atomic resolution by averaging half a million subtomograms. While abundant complexes/particles are often clustered in arrays, precisely locating and seamlessly averaging such particles across many tomograms present major challenges. Here, we developed TomoNet, a software package with a modern graphical user interface that carries out the entire cryoET and subtomogram averaging pipeline to achieve high resolution. TomoNet features built-in automatic particle picking and three-dimensional (3D) classification functions and integrates commonly used packages to streamline high-resolution subtomogram averaging for structures in 1D, 2D, or 3D arrays. Automatic particle picking is accomplished in two complementary ways: one based on template matching and the other using deep learning. TomoNet's hierarchical file organization and visual display facilitate the efficient data management required for large cryoET datasets. Applications of TomoNet to three types of datasets demonstrate its capability for efficient and accurate particle picking on flexible and imperfect lattices to obtain high-resolution 3D biological structures: virus-like particles, bacterial surface layers within cellular lamellae, and membranes decorated with nuclear egress protein complexes. These results demonstrate TomoNet's potential for broad application to various cryoET projects targeting high-resolution in situ structures.
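The template-matching half of particle picking can be sketched in 2D with scikit-image: correlate a reference particle against the image and keep well-separated correlation peaks. TomoNet operates on 3D tomograms with its own implementation; everything below is illustrative.

```python
# 2D sketch of template-matching particle picking: normalized cross-
# correlation of a reference particle, then peak detection on the map.
import numpy as np
from skimage.feature import match_template, peak_local_max

rng = np.random.default_rng(2)
image = rng.normal(0, 1, size=(256, 256))                  # noisy micrograph stand-in
template = np.zeros((15, 15))
template[5:10, 5:10] = 3.0                                 # toy particle shape
for y, x in [(40, 60), (120, 200), (200, 90)]:             # plant three copies
    image[y:y + 15, x:x + 15] += template

corr = match_template(image, template, pad_input=True)     # NCC map, same size as image
picks = peak_local_max(corr, min_distance=10, threshold_abs=0.5)
print(picks)   # (row, col) coordinates of candidate particles
```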

Citations: 0