Signal enhancement for two-dimensional cryo-EM data processing
Guy Sharon, Yoel Shkolnisky, Tamir Bendory
Pub Date: 2023-03-09 | eCollection Date: 2023-01-01 | DOI: 10.1017/S2633903X23000065
Different tasks in the computational pipeline of single-particle cryo-electron microscopy (cryo-EM) require enhancing the quality of the highly noisy raw images. To this end, we develop an efficient algorithm for signal enhancement of cryo-EM images. The enhanced images can be used for a variety of downstream tasks, such as two-dimensional classification, removing uninformative images, constructing ab initio models, generating templates for particle picking, providing a quick assessment of the data set, dimensionality reduction, and symmetry detection. The algorithm includes built-in quality measures to assess its performance and alleviate the risk of model bias. We demonstrate the effectiveness of the proposed algorithm on several experimental data sets. In particular, we show that the quality of the resulting images is high enough to produce ab initio models of Å resolution. The algorithm is accompanied by a publicly available, documented, and easy-to-use code.
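The abstract describes what the enhanced images are used for but not the algorithm itself; as a rough, generic illustration of why two-dimensional signal enhancement of noisy projections works, the sketch below rotationally aligns noisy copies of an image to a reference and averages them, which raises the SNR roughly in proportion to the number of images averaged. Everything here is illustrative and is not the authors' method.

```python
import numpy as np
from scipy.ndimage import rotate

def align_and_average(noisy_images, reference, angles=np.arange(0, 360, 6)):
    """Toy signal enhancement: rotationally align each noisy image to a known
    reference and average. Illustrative only -- not the paper's algorithm."""
    acc = np.zeros_like(reference, dtype=float)
    for img in noisy_images:
        # Pick the in-plane rotation that correlates best with the reference.
        best = max(angles, key=lambda a: float(np.vdot(rotate(img, a, reshape=False), reference)))
        acc += rotate(img, best, reshape=False)
    return acc / len(noisy_images)

# Toy data: randomly rotated, heavily degraded copies of an asymmetric blob.
rng = np.random.default_rng(0)
y, x = np.mgrid[-32:32, -32:32]
clean = ((x**2 + 4 * y**2) < 200).astype(float)
noisy = [rotate(clean, rng.uniform(0, 360), reshape=False) + rng.normal(0, 3, clean.shape)
         for _ in range(60)]
enhanced = align_and_average(noisy, clean)  # noise std drops roughly by sqrt(60)
```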
BayesTICS: Local temporal image correlation spectroscopy and Bayesian simulation technique for sparse estimation of diffusion in fluorescence imaging
Anca Caranfil, Yann Le Cunff, Charles Kervrann
Pub Date: 2023-02-27 | eCollection Date: 2023-01-01 | DOI: 10.1017/S2633903X23000041
The dynamics and fusion of vesicles during the last steps of exocytosis are not yet well established in cell biology. An open issue is the characterization of the diffusion process at the plasma membrane. Total internal reflection fluorescence microscopy (TIRFM) has been used successfully to analyze the coordination of proteins involved in this mechanism; it captures protein dynamics at high frame rates with reasonable signal-to-noise values. Nevertheless, methodological approaches that can analyze and estimate diffusion in small local areas, at the scale of a single diffusing spot within a cell, are still lacking. To address this issue, we propose a novel correlation-based method for local diffusion estimation. As a starting point, we consider Fick's second law of diffusion, which relates the diffusive flux to the gradient of the concentration. We then derive an explicit parametric model that is fitted to time-correlation signals computed from regions of interest (ROIs) containing individual spots. Our modeling and Bayesian estimation framework is well suited to representing isolated diffusion events and is robust to noise, ROI size, and the localization of spots within ROIs. The performance of BayesTICS is demonstrated on both synthetic and real TIRFM images depicting Transferrin Receptor proteins.
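The paper derives its own parametric model from Fick's second law and fits it within a Bayesian framework; as a simpler point of reference, classical temporal image correlation spectroscopy fits the ROI's temporal autocorrelation to the 2-D diffusion decay G(tau) = G0 / (1 + tau/tau_d), with tau_d = w^2 / (4D), where w is the PSF radius. The sketch below uses an ordinary least-squares fit rather than the paper's Bayesian estimator; the names and the model form are standard TICS assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit

def temporal_autocorrelation(stack):
    """Normalized temporal autocorrelation G(tau) of an ROI stack of shape (T, H, W)."""
    fluct = stack - stack.mean(axis=0, keepdims=True)
    T = stack.shape[0]
    g = [np.mean(fluct[:T - tau] * fluct[tau:]) for tau in range(T // 2)]
    return np.array(g) / stack.mean() ** 2

def diffusion_model(tau, g0, tau_d, offset):
    """Classical 2-D diffusion decay used in TICS (not the paper's Bayesian model)."""
    return g0 / (1.0 + tau / tau_d) + offset

def estimate_D(roi, dt, w):
    """roi: (T, H, W) crop around a single spot; dt: frame interval (s);
    w: PSF radius (um). Returns a diffusion coefficient in um^2/s."""
    g = temporal_autocorrelation(roi)
    taus = np.arange(len(g)) * dt
    # Skip tau = 0, which is dominated by uncorrelated noise variance.
    (g0, tau_d, offset), _ = curve_fit(diffusion_model, taus[1:], g[1:],
                                       p0=(g[1], 10 * dt, 0.0))
    return w ** 2 / (4.0 * tau_d)
```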
Multiple-image super-resolution of cryo-electron micrographs based on deep internal learning
Qinwen Huang, Ye Zhou, Hsuan-Fu Liu, Alberto Bartesaghi
Pub Date: 2023-02-09 | eCollection Date: 2023-01-01 | DOI: 10.1017/S2633903X2300003X
Single-particle cryo-electron microscopy (cryo-EM) is a powerful imaging modality capable of visualizing proteins and macromolecular complexes at near-atomic resolution. The low electron doses used to prevent radiation damage to the biological samples, however, result in images where the power of the noise is 100 times greater than the power of the signal. To overcome these low signal-to-noise ratios (SNRs), hundreds of thousands of particle projections are averaged to determine the three-dimensional structure of the molecule of interest. The sampling requirements of high-resolution imaging impose limitations on the pixel sizes that can be used for acquisition, limiting the size of the field of view and requiring data collection sessions of several days to accumulate sufficient numbers of particles. Meanwhile, recent image super-resolution (SR) techniques based on neural networks have shown state-of-the-art performance on natural images. Building on these advances, we present here a multiple-image SR algorithm based on deep internal learning designed specifically to work under low-SNR conditions. Our approach leverages the internal image statistics of cryo-EM movies and does not require training on ground-truth data. When applied to single-particle datasets of apoferritin and the T20S proteasome, we show that the resolution of the 3D structure obtained from SR micrographs can surpass the limits imposed by the imaging system. Our results indicate that the combination of low-magnification imaging with in silico image SR has the potential to accelerate cryo-EM data collection by including more particles in each exposure without sacrificing resolution.
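The paper's method is a neural network trained by deep internal learning on the movie's own statistics; purely to make the data flow of multiple-image super-resolution concrete, the sketch below implements the classical shift-and-add baseline, in which several low-resolution frames with known sub-pixel shifts are combined onto a finer grid. Function and variable names are illustrative, and nothing here reproduces the authors' approach.

```python
import numpy as np

def shift_and_add_sr(frames, shifts, factor=2):
    """Classical multi-image super-resolution baseline (not the paper's CNN):
    scatter each low-resolution frame onto a `factor`-times finer grid according
    to its known sub-pixel shift, then normalize by the hit count.
    frames: list of (H, W) arrays; shifts: list of (dy, dx) in low-res pixels
    (in practice the shifts would have to be estimated, not given)."""
    H, W = frames[0].shape
    hi = np.zeros((H * factor, W * factor))
    hits = np.zeros_like(hi)
    for frame, (dy, dx) in zip(frames, shifts):
        ys = (np.arange(H)[:, None] * factor + int(round(dy * factor))) % (H * factor)
        xs = (np.arange(W)[None, :] * factor + int(round(dx * factor))) % (W * factor)
        hi[ys, xs] += frame
        hits[ys, xs] += 1
    return hi / np.maximum(hits, 1)
```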
Erratum: Automatic classification and neurotransmitter prediction of synapses in electron microscopy - CORRIGENDUM
Angela Zhang, S Shailja, Cezar Borba, Yishen Miao, Michael Goebel, Raphael Ruschel, Kerrianne Ryan, William Smith, B S Manjunath
Pub Date: 2023-02-01 | eCollection Date: 2023-01-01 | DOI: 10.1017/S2633903X23000016
[This corrects the article DOI: 10.1017/S2633903X2200006X.].
Performant web-based interactive visualization tool for spatially-resolved transcriptomics experiments
Chaichontat Sriworarat, Annie Nguyen, Nicholas J. Eagles, Leonardo Collado-Torres, Keri Martinowich, Kristen R. Maynard, Stephanie C. Hicks
Pub Date: 2023-01-01 | DOI: 10.1017/s2633903x2300017x
High-resolution and multiplexed imaging techniques are giving us increasingly detailed views of biological systems. However, sharing, exploring, and customizing the visualization of large multidimensional images can be a challenge. Here, we introduce Samui, a performant and interactive image visualization tool that runs completely in the web browser. Samui is specifically designed for fast image visualization and annotation, and enables users to browse through large images and their selected features within seconds of receiving a link. We demonstrate the broad utility of Samui with images generated on two platforms: Vizgen MERFISH and 10x Genomics Visium Spatial Gene Expression. Samui, along with example datasets, is available at https://samuibrowser.com.
Annotation-free learning of a spatio-temporal manifold of the cell life cycle
Kristofer delas Peñas, Mariia Dmitrieva, Dominic Waithe, Jens Rittscher
Pub Date: 2023-01-01 | DOI: 10.1017/s2633903x23000193
The cell cycle is a complex biological phenomenon that plays an important role in many cell biological processes and disease states. Machine learning is emerging as a pivotal technique for the study of the cell cycle, and a number of tools and models for cell cycle analysis are now available. Most, however, rely heavily on expert annotations, prior knowledge of mechanisms, and imaging with several fluorescent markers to train their models. Many are also limited to processing only the spatial information in the cell images. In this work, we describe a different approach based on representation learning to construct a manifold of the cell life cycle. We trained our model so that the representations are learned without exhaustive annotations or mechanistic assumptions. Moreover, our model uses microscopy images derived from a single fluorescence channel and utilizes both the spatial and temporal information in these images. We show that even with fewer channels and self-supervision, information relevant to cell cycle analysis, such as staging and estimation of cycle duration, can still be extracted, which demonstrates the potential of our approach to aid future cell cycle studies and, in discovery cell biology, to probe and understand novel dynamic systems.
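The abstract does not specify the self-supervised objective; one common annotation-free way to learn a spatio-temporal representation from a single channel is a temporal-order pretext task, sketched below in PyTorch (predict whether two crops of the same cell are in chronological order). The architecture, shapes, and loss here are assumptions for illustration, not the authors' model.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Tiny single-channel CNN encoder producing a d-dimensional representation."""
    def __init__(self, d=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, d))
    def forward(self, x):
        return self.net(x)

encoder = Encoder()
order_head = nn.Linear(2 * 64, 1)   # predicts "is frame A earlier than frame B?"
opt = torch.optim.Adam(list(encoder.parameters()) + list(order_head.parameters()), 1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def training_step(frame_a, frame_b, a_is_earlier):
    """frame_*: (B, 1, H, W) crops of the same cell at two time points;
    a_is_earlier: (B,) binary labels derived for free from the movie order."""
    z = torch.cat([encoder(frame_a), encoder(frame_b)], dim=1)
    loss = loss_fn(order_head(z).squeeze(1), a_is_earlier.float())
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```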
Fast principal component analysis for cryo-electron microscopy images
Nicholas F Marshall, Oscar Mickelin, Yunpeng Shi, Amit Singer
Pub Date: 2023-01-01 | Epub Date: 2023-02-03 | DOI: 10.1017/s2633903x23000028
Principal component analysis (PCA) plays an important role in the analysis of cryo-electron microscopy (cryo-EM) images for various tasks such as classification, denoising, compression, and ab initio modeling. We introduce a fast method for estimating a compressed representation of the 2-D covariance matrix of noisy cryo-EM projection images affected by radial point spread functions that enables fast PCA computation. Our method is based on a new algorithm for expanding images in the Fourier-Bessel basis (the harmonics on the disk), which provides a convenient way to handle the effect of the contrast transfer functions. For N images of size L × L, our method has time complexity O(NL³ + L⁴) and space complexity O(NL² + L³). In contrast to previous work, these complexities are independent of the number of different contrast transfer functions of the images. We demonstrate our approach on synthetic and experimental data and show acceleration by factors of up to two orders of magnitude.
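For orientation, the fragment below shows what a naive PCA of the projection images looks like: form the dense L² × L² sample covariance and take its top eigenvectors. This baseline costs O(NL⁴ + L⁶) time and O(L⁴) memory, which is the scaling the paper's Fourier-Bessel approach improves to O(NL³ + L⁴) time and O(NL² + L³) space. The snippet is purely illustrative and omits the CTF handling that the paper addresses.

```python
import numpy as np

def naive_image_pca(images, k=10):
    """Baseline PCA on N images of size L x L via eigendecomposition of the
    dense pixel covariance. O(N L^4 + L^6) time, O(L^4) memory -- the cost the
    paper's Fourier-Bessel method avoids. Illustration only, no CTF correction."""
    N, L, _ = images.shape
    X = images.reshape(N, L * L).astype(np.float64)
    X -= X.mean(axis=0)                      # center the data
    cov = X.T @ X / N                        # (L^2, L^2) sample covariance
    vals, vecs = np.linalg.eigh(cov)
    top = np.argsort(vals)[::-1][:k]
    return vals[top], vecs[:, top].T.reshape(k, L, L)
```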
Visualization and quality control tools for large-scale multiplex tissue analysis in TissUUmaps3
Andrea Behanova, Christophe Avenel, Axel Andersson, Eduard Chelebian, Anna Klemm, Lina Wik, Arne Östman, Carolina Wählby
Pub Date: 2023-01-01 | DOI: 10.1017/s2633903x23000053
Large-scale multiplex tissue analysis aims to understand processes such as development and tumor formation by studying the occurrence and interaction of cells in local environments in, for example, tissue samples from patient cohorts. A typical analysis procedure is to delineate individual cells, classify them into cell types, and analyze their spatial relationships. All of these steps come with a number of challenges, and to address them and identify the bottlenecks of the analysis, it is necessary to include quality control tools in the analysis workflow. This makes it possible to optimize the steps and adjust settings in order to get better and more precise results. Additionally, the development of automated approaches for tissue analysis requires visual verification to reduce skepticism about the accuracy of the results. Quality control tools can thus help build users' trust in automated approaches. In this paper, we present three plugins for visualization and quality control in large-scale multiplex tissue analysis of microscopy images. The first plugin focuses on the quality of cell staining, the second supports interactive evaluation and comparison of different cell classification results, and the third serves for reviewing interactions between different cell types.
ClusterAlign: A fiducial tracking and tilt series alignment tool for thick sample tomography
Shahar Seifer, Michael Elbaum
Pub Date: 2022-08-05 | eCollection Date: 2022-01-01 | DOI: 10.1017/S2633903X22000071
Thick specimens, as encountered in cryo-scanning transmission electron tomography, pose special challenges to conventional reconstruction workflows. The visibility of features, including gold nanoparticles introduced as fiducial markers, varies strongly through the tilt series. As a result, tedious manual refinement may be required in order to produce a successful alignment. Information from highly tilted views must often be excluded, to the detriment of axial resolution in the reconstruction. We introduce here an approach to tilt series alignment based on identification of fiducial particle clusters that transform coherently in rotation, essentially those that lie at similar depth. Clusters are identified by comparison of tilted views with a single untilted reference, rather than with adjacent tilts. The software, called ClusterAlign, proves robust to a poor signal-to-noise ratio and to varying visibility of the individual fiducials, and succeeds in carrying the alignment to the ends of the tilt series, where other methods tend to fail. ClusterAlign may be used to generate a list of tracked fiducials, to align a tilt series, or to perform a complete 3D reconstruction. Tools to evaluate alignment error by projection matching are included. Execution involves no manual intervention, and adherence to standard file formats facilitates an interface with other software, particularly IMOD/etomo, tomo3d, and tomoalign.
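To make the central idea concrete (fiducials lying at similar depth map from the untilted reference to a tilted view by approximately the same 2-D affine transform), the sketch below greedily groups markers whose reference-to-tilt correspondence is explained by one shared affine fit. It is a schematic of the idea only, under assumed inputs (matched marker coordinates in both views), not the ClusterAlign implementation.

```python
import numpy as np

def coherent_clusters(ref_xy, tilt_xy, tol=3.0, iters=3):
    """Group fiducials whose mapping from the untilted reference view to a
    tilted view is explained by one shared 2-D affine transform, i.e. markers
    at similar depth that move coherently. Greedy sketch, not ClusterAlign.
    ref_xy, tilt_xy: (N, 2) matched marker coordinates in the two views."""
    remaining = np.arange(len(ref_xy))
    clusters = []
    while len(remaining) >= 3:
        idx = remaining
        P = np.c_[ref_xy[idx], np.ones(len(idx))]          # homogeneous coords
        A, *_ = np.linalg.lstsq(P, tilt_xy[idx], rcond=None)
        for _ in range(iters):                             # refit on current inliers
            resid = np.linalg.norm(P @ A - tilt_xy[idx], axis=1)
            inliers = idx[resid < tol]
            if len(inliers) < 3:
                break
            P_in = np.c_[ref_xy[inliers], np.ones(len(inliers))]
            A, *_ = np.linalg.lstsq(P_in, tilt_xy[inliers], rcond=None)
        resid = np.linalg.norm(P @ A - tilt_xy[idx], axis=1)
        inliers = idx[resid < tol]
        if len(inliers) < 3:                               # no coherent group left
            break
        clusters.append(inliers)
        remaining = np.setdiff1d(remaining, inliers)
    return clusters
```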
Automatic classification and neurotransmitter prediction of synapses in electron microscopy
Angela Zhang, S Shailja, Cezar Borba, Yishen Miao, Michael Goebel, Raphael Ruschel, Kerrianne Ryan, William Smith, B S Manjunath
Pub Date: 2022-07-29 | eCollection Date: 2022-01-01 | DOI: 10.1017/S2633903X2200006X
This paper presents a deep-learning-based workflow to detect synapses and predict their neurotransmitter type in the primitive chordate Ciona intestinalis (Ciona) electron microscopic (EM) images. Identifying synapses from EM images to build a full map of connections between neurons is a labor-intensive process and requires significant domain expertise. Automation of synapse classification would hasten the generation and analysis of connectomes. Furthermore, inferences concerning neuron type and function from synapse features are in many cases difficult to make. Finding the connection between synapse structure and function is an important step in fully understanding a connectome. Class Activation Maps derived from the convolutional neural network provide insights on important features of synapses based on cell type and function. The main contribution of this work is in the differentiation of synapses by neurotransmitter type through the structural information in their EM images. This enables the prediction of neurotransmitter types for neurons in Ciona, which were previously unknown. The prediction model with code is available on GitHub.
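The abstract notes that Class Activation Maps highlight which synapse features drive the classification; the snippet below is the standard CAM computation for a CNN ending in global average pooling, included only to make that idea concrete. Tensor shapes and variable names are assumptions, not taken from the authors' published code.

```python
import torch
import torch.nn.functional as F

def class_activation_map(feature_maps, fc_weights, class_idx, out_size):
    """Standard CAM: weight the last convolutional feature maps (1, C, h, w) by
    the classifier weights (num_classes, C) of the chosen class, rectify,
    normalize, and upsample to the input size. Generic recipe, illustrative only."""
    weights = fc_weights[class_idx]                             # (C,)
    cam = (weights[:, None, None] * feature_maps[0]).sum(dim=0) # (h, w)
    cam = F.relu(cam)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return F.interpolate(cam[None, None], size=out_size,
                         mode="bilinear", align_corners=False)[0, 0]
```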