Pub Date: 2022-05-20 | eCollection Date: 2022-01-01 | DOI: 10.1017/S2633903X22000058
Ana Cayuela López, José A Gómez-Pedrero, Ana M O Blanco, Carlos Oscar S Sorzano
Fluorescence microscopy techniques are increasingly used to visualize and analyze a wide range of biological processes in the life sciences. We describe a semiautomated and versatile tool called Cell-TypeAnalyzer that avoids the time-consuming and bias-prone manual classification of cells into cell types. It is an open-source plugin for Fiji or ImageJ that detects and classifies cells in 2D images. Our workflow consists of (a) image preprocessing, spatial calibration, and selection of a region of interest for analysis; (b) segmentation to isolate cells from the background (optionally including user-defined preprocessing steps to aid cell identification); (c) extraction of features from each cell; (d) filters to select relevant cells; (e) definition of the specific criteria a cell must satisfy to belong to each cell type; (f) cell classification; and (g) flexible analysis of the results. Our software provides a modular and flexible strategy for cell classification through a wizard-like graphical user interface that intuitively guides the user through each step of the analysis. The procedure may be applied in batch mode to multiple microscopy files: once the analysis is set up, it can be performed automatically and efficiently on many images. The plugin requires no programming skills and can analyze cells from many different acquisition setups.
Title: Cell-TypeAnalyzer: A flexible Fiji/ImageJ plugin to classify cells according to user-defined criteria. Biological Imaging, e5. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10951792/pdf/
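The filter-and-classify logic of steps (d) through (f) can be sketched in a few lines. Everything below (feature names, thresholds, the `classify_cells` helper) is a hypothetical illustration of the idea, not the plugin's actual API — Cell-TypeAnalyzer itself is operated entirely through its graphical interface.

```python
# Sketch of Cell-TypeAnalyzer's filter-and-classify steps (d)-(f), assuming
# per-cell features were already measured in step (c). Names are hypothetical.

def classify_cells(cells, filters, type_criteria):
    """Keep cells passing every filter, then assign the first matching type."""
    selected = [c for c in cells
                if all(lo <= c[f] <= hi for f, (lo, hi) in filters.items())]
    for cell in selected:
        cell["type"] = next(
            (name for name, crit in type_criteria.items()
             if all(lo <= cell[f] <= hi for f, (lo, hi) in crit.items())),
            "unclassified",
        )
    return selected

cells = [
    {"area": 120.0, "mean_intensity": 80.0},
    {"area": 15.0, "mean_intensity": 200.0},   # too small: filtered out in (d)
    {"area": 150.0, "mean_intensity": 220.0},
]
filters = {"area": (50.0, 500.0)}              # step (d): drop debris
type_criteria = {                              # step (e): user-defined types
    "marker_positive": {"mean_intensity": (150.0, 255.0)},
    "marker_negative": {"mean_intensity": (0.0, 150.0)},
}
result = classify_cells(cells, filters, type_criteria)
```

The same criteria dictionaries, once defined, could be reapplied unchanged to every image in a batch, which is the point of setting the analysis up once.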
Pub Date: 2022-05-17 | DOI: 10.1017/S2633903X22000046
Kamal L. Nahas, João Ferreira Fernandes, Nina Vyas, C. Crump, Stephen Graham, M. Harkiolaki
Cryo-soft-X-ray tomography is increasingly used in biological research to study the morphology of cellular compartments and how they change in response to different stimuli, such as viral infections. Segmentation of these compartments is limited by time-consuming manual tools or by machine-learning algorithms that require extensive time and effort to train. Here we describe Contour, a new, easy-to-use, highly automated segmentation tool that enables accelerated segmentation of tomograms to delineate distinct cellular compartments. Using Contour, cellular structures can be segmented based on their projection intensity and geometrical width by applying a threshold range to the image and excluding noise that is narrower than the cellular compartments of interest. This method is less laborious and less prone to errors of human judgement than current tools that require features to be traced manually, and it does not require training datasets as machine-learning-driven segmentation would. We show that high-contrast compartments such as mitochondria, lipid droplets, and features at the cell surface can be easily segmented with this technique in the context of investigating herpes simplex virus 1 infection. Contour can extract geometric measurements from 3D segmented volumes, providing a new method to quantitate cryo-soft-X-ray tomography data. Contour can be freely downloaded at github.com/kamallouisnahas/Contour.
Title: Contour: A semi-automated segmentation and quantitation tool for cryo-soft-X-ray tomography. Biological Imaging, vol. 2.
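The core segmentation idea — a threshold range followed by exclusion of noise narrower than the compartments of interest — can be illustrated in a short sketch. This is shown on a 2D slice for brevity, and the function and parameter names are hypothetical, not Contour's implementation:

```python
import numpy as np
from scipy import ndimage

def segment_by_threshold_and_width(img, lo, hi, min_width):
    """Keep pixels in [lo, hi], then drop components thinner than min_width."""
    mask = (img >= lo) & (img <= hi)            # threshold range
    labels, _ = ndimage.label(mask)             # connected components
    out = np.zeros_like(mask)
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        widths = [s.stop - s.start for s in sl]  # bounding-box extent per axis
        if min(widths) >= min_width:             # exclude thin noise
            out |= labels == i
    return out

img = np.zeros((8, 8))
img[1:5, 1:5] = 100.0    # a 4x4 "organelle"
img[6, 6] = 100.0        # single-pixel noise, too narrow to keep
seg = segment_by_threshold_and_width(img, 50.0, 150.0, min_width=2)
```

Because both criteria are geometric and intensity-based, no training data is needed — which is the contrast with machine-learning segmentation drawn in the abstract.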
Pub Date: 2022-04-22 | eCollection Date: 2022-01-01 | DOI: 10.1017/S2633903X22000022
Rituparna Sarkar, Daniel Darby, Sigolène Meilhac, Jean-Christophe Olivo-Marin
Advances in tissue engineering for cardiac regenerative medicine require a cellular-level understanding of the mechanism of cardiac muscle growth during the embryonic developmental stage. Computational methods that automate cell segmentation in 3D and deliver accurate, quantitative cardiomyocyte morphology are imperative for providing insight into the cell behavior underlying cardiac tissue growth. Detecting individual cells in volumetric images of dense tissue, beset by low signal-to-noise ratio and severe intensity inhomogeneity, is a challenging task. In this article, we develop a robust segmentation tool capable of extracting cellular morphological parameters from 3D multifluorescence images of murine heart captured via light-sheet microscopy. The proposed pipeline incorporates a neural network for 2D detection of nuclei and cell membranes. A graph-based global association employs the 2D nuclei detections to reconstruct 3D nuclei. A novel optimization, embedding the network-flow algorithm in an alternating direction method of multipliers, is proposed to solve the global object association problem. The associated 3D nuclei serve as the initialization of an active mesh model to obtain the 3D segmentation of individual myocardial cells. The advantage of our method over state-of-the-art methods is demonstrated through various qualitative and quantitative evaluations.
Title: 3D cell morphology detection by association for embryo heart morphogenesis. Biological Imaging, e2. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10951799/pdf/
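The reconstruction of 3D nuclei from per-slice 2D detections can be illustrated with a deliberately simplified greedy association between consecutive z-slices. The paper itself solves this association globally (a network-flow formulation inside ADMM), so the sketch below shows only the underlying linking idea, with hypothetical names throughout:

```python
# Simplified stand-in for the paper's global 2D-to-3D association: link nuclei
# centroids in consecutive z-slices when they lie within max_dist of each other.

def link_slices(detections, max_dist):
    """detections: per-z-slice lists of (x, y) centroids -> list of 3D tracks."""
    tracks = [[(0, c)] for c in detections[0]]
    for z, cents in enumerate(detections[1:], start=1):
        unused = list(cents)
        for tr in tracks:
            last_z, (px, py) = tr[-1]
            if last_z != z - 1 or not unused:   # only extend live tracks
                continue
            best = min(unused, key=lambda c: (c[0] - px) ** 2 + (c[1] - py) ** 2)
            if (best[0] - px) ** 2 + (best[1] - py) ** 2 <= max_dist ** 2:
                tr.append((z, best))
                unused.remove(best)
        tracks += [[(z, c)] for c in unused]    # unmatched detections: new nuclei
    return tracks

dets = [[(5.0, 5.0), (20.0, 20.0)],
        [(5.5, 5.2), (20.1, 19.8)],
        [(5.4, 5.6)]]                # second nucleus ends before the last slice
tracks = link_slices(dets, max_dist=2.0)
```

A greedy pass like this can make locally optimal but globally wrong matches in dense tissue, which motivates the global network-flow optimization used in the paper.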
Pub Date: 2022-04-19 | eCollection Date: 2022-01-01 | DOI: 10.1017/S2633903X22000034
Emmanuel Bouilhol, Anca F Savulescu, Edgar Lefevre, Benjamin Dartigues, Robyn Brackin, Macha Nikolski
Detection of RNA spots in single-molecule fluorescence in-situ hybridization microscopy images remains a difficult task, especially when applied to large volumes of data. The variable intensity of RNA spots, combined with the high noise level of the images, often requires manual adjustment of the spot-detection threshold for each image. In this work, we introduce DeepSpot, a deep-learning-based tool specifically designed for RNA spot enhancement that enables spot detection without image-by-image parameter tuning, and we show how it enables accurate downstream spot detection. DeepSpot's architecture is inspired by small-object detection approaches: it incorporates dilated convolutions into a module specifically designed for context aggregation around small objects and uses residual convolutions to propagate this information along the network. This enables DeepSpot to enhance all RNA spots to the same intensity, which circumvents the need for parameter tuning. We evaluated how easily spots can be detected in images enhanced with our method by testing DeepSpot on 20 simulated and 3 experimental datasets, and showed that an accuracy of more than 97% is achieved. Moreover, comparison with an alternative deep-learning approach for mRNA spot detection (deepBlink) indicated that DeepSpot provides more precise mRNA detection. In addition, we generated single-molecule fluorescence in-situ hybridization images of mouse fibroblasts in a wound-healing assay to evaluate whether DeepSpot enhancement can enable seamless mRNA spot detection and thus streamline studies of localized mRNA expression in cells.
Title: DeepSpot: A deep neural network for RNA spot enhancement in single-molecule fluorescence in-situ hybridization microscopy images. Biological Imaging, e4. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10951802/pdf/
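A small calculation shows why dilated convolutions suit context aggregation: stacking a few dilated layers grows the receptive field much faster than stacking ordinary layers of the same depth. The dilation rates below are illustrative, not the ones used by DeepSpot:

```python
# Effective receptive field of a stack of k x k convolutions: each layer with
# dilation d adds (k - 1) * d pixels of context in each direction combined.

def receptive_field(kernel_size, dilations):
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

plain = receptive_field(3, [1, 1, 1])     # three ordinary 3x3 convolutions
dilated = receptive_field(3, [1, 2, 4])   # same depth, exponentially dilated
```

With only three layers, the dilated stack already sees a neighborhood far larger than a typical RNA spot, which is the context needed to normalize spot intensities against local background.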
To overcome the physical barriers imposed by light diffraction, super-resolution techniques are often applied in fluorescence microscopy. State-of-the-art approaches require specific and often demanding acquisition conditions to achieve adequate levels of both spatial and temporal resolution. Analyzing the stochastic fluctuations of fluorescent molecules offers a solution to these limitations, as sufficiently high spatio-temporal resolution for live-cell imaging can be achieved with common microscopes and conventional fluorescent dyes. Based on this idea, we present COL0RME, a method for covariance-based super-resolution microscopy with intensity estimation, which achieves good spatio-temporal resolution by solving a sparse optimization problem in the covariance domain, and we discuss automatic parameter-selection strategies. The method is composed of two steps: in the first, both the emitters' independence and the sparse spatial distribution of the fluorescent molecules are exploited to provide accurate localization; in the second, real intensity values are estimated on the computed support. We present several numerical results on both synthetic and real fluorescence microscopy images, together with comparisons against state-of-the-art approaches. Our results show that COL0RME outperforms competing methods that likewise exploit temporal fluctuations; in particular, it achieves better localization, reduces background artifacts, and avoids fine parameter tuning.
Title: COL0RME: Super-resolution microscopy based on sparse blinking/fluctuating fluorophore localization and intensity estimation. Vasiliki Stergiopoulou, Luca Calatroni, Henrique de Morais Goulart, Sébastien Schaub, Laure Blanc-Féraud. Pub Date: 2022-02-16 | DOI: 10.1017/S2633903X22000010. Biological Imaging, e1. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10951805/pdf/
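The benefit of working in the covariance domain can be seen in a toy simulation: pixels observing the same blinking emitter covary strongly over time, while pixels observing independent emitters do not, so support estimation on the temporal covariance separates signal from surroundings. This illustrates only the principle, not COL0RME's actual sparse optimization:

```python
import numpy as np

# Toy model: two independently blinking emitters observed over T frames.
rng = np.random.default_rng(0)
T = 5000
blink_a = rng.random(T) < 0.3   # emitter A's on/off state per frame
blink_b = rng.random(T) < 0.3   # emitter B, independent of A

pix1 = 1.0 * blink_a + 0.1 * rng.standard_normal(T)  # pixel seeing emitter A
pix2 = 0.8 * blink_a + 0.1 * rng.standard_normal(T)  # neighbor, same emitter A
pix3 = 1.0 * blink_b + 0.1 * rng.standard_normal(T)  # distant pixel, emitter B

cov_same = np.cov(pix1, pix2)[0, 1]   # large: shared fluctuations
cov_diff = np.cov(pix1, pix3)[0, 1]   # near zero: independent emitters
```

Because independent-emitter cross-terms vanish in expectation, the covariance image is much sparser than the mean image, which is what makes the sparse optimization in the covariance domain effective.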
Pub Date: 2022-01-01 | Epub Date: 2022-08-30 | DOI: 10.1017/s2633903x22000083
Samuel Bowerman, Jyothi Mahadevan, Philip Benson, Johannes Rudolph, Karolin Luger
Eukaryotic cells are constantly subject to DNA damage, often with detrimental consequences for the health of the organism. Cells mitigate this DNA damage through a variety of repair pathways involving a diverse and large number of different proteins. To better understand the cellular response to DNA damage, one needs accurate measurements of the accumulation, retention, and dissipation timescales of these repair proteins. Here, we describe an automated implementation of the "quantitation of fluorescence accumulation after DNA damage" (qFADD) method that greatly enhances the analysis and quantitation of laser microirradiation, a widely used technique for studying the recruitment of DNA repair proteins to sites of DNA damage. This open-source implementation ("qFADD.py") is available as a stand-alone software package that can be run on laptops or computer clusters. Our implementation includes corrections for nuclear drift, an automated grid search for the model of best fit, and the ability to model both horizontal-striping and speckle experiments. To improve statistical rigor, the grid-search algorithm also includes automated simulation of replicates.
Title: Automated modeling of protein accumulation at DNA damage sites using qFADD.py. Biological Imaging. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9683346/pdf/
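An automated grid search for a model of best fit can be sketched with a hypothetical single-exponential recruitment model. qFADD.py itself fits a diffusion-based model with its own parameters, so the function and parameter names below are illustrative only:

```python
import numpy as np

# Grid search over (amplitude, timescale) of a toy recruitment model
# F(t) = A * (1 - exp(-t / tau)), scored by least squares.

def grid_search_fit(t, signal, amps, taus):
    best, best_err = None, np.inf
    for A in amps:
        for tau in taus:
            model = A * (1.0 - np.exp(-t / tau))
            err = np.sum((signal - model) ** 2)
            if err < best_err:
                best, best_err = (A, tau), err
    return best

t = np.linspace(0.0, 60.0, 61)                    # seconds after irradiation
truth = 2.0 * (1.0 - np.exp(-t / 10.0))           # synthetic accumulation curve
fit = grid_search_fit(t, truth,
                      amps=np.arange(0.5, 3.1, 0.5),
                      taus=np.arange(5.0, 21.0, 5.0))
```

Replicate simulation, as described in the abstract, would amount to rerunning this fit on noise-perturbed copies of the signal to attach uncertainty to the recovered parameters.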
Pub Date: 2021-09-24 | eCollection Date: 2021-01-01 | DOI: 10.1017/S2633903X21000027
Bruno M Saraiva, Ludwig Krippahl, Sérgio R Filipe, Ricardo Henriques, Mariana G Pinho
Fluorescence microscopy is a critical tool for cell biology studies of bacterial cell division and morphogenesis. As the analysis of fluorescence microscopy images has evolved beyond initial qualitative studies, numerous image analysis tools have been developed to extract quantitative parameters of cell morphology and organization. To understand the cellular processes required for bacterial growth and division, it is particularly important to perform such analysis in the context of cell cycle progression. However, manual assignment of cell cycle stages is laborious and prone to user bias. Although cell elongation can be used as a proxy for cell cycle progression in rod-shaped or ovoid bacteria, that is not the case for cocci such as Staphylococcus aureus. Here, we describe eHooke, an image analysis framework developed specifically for automated analysis of microscopy images of spherical bacterial cells. eHooke contains a trained artificial neural network that automatically classifies the cell cycle phase of individual S. aureus cells. Users can then apply various functions to obtain biologically relevant information on morphological features of individual cells and on the cellular localization of proteins, in the context of the cell cycle.
Title: eHooke: A tool for automated image analysis of spherical bacteria based on cell cycle progression. Biological Imaging, vol. 1, e3. Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/7c/41/S2633903X21000027a.PMC8724265.pdf
Pub Date: 2021-08-17 | DOI: 10.1017/S2633903X21000039
Nasim Jamali, E. T. Dobson, K. Eliceiri, Anne E Carpenter, B. Cimini
In this paper, we summarize a global survey of 484 participants from the imaging community, conducted in 2020 through the NIH Center for Open BioImage Analysis (COBA). This 23-question survey covered experience with image analysis, scientific background and demographics, and views and requests from different members of the imaging community. Through open-ended questions, we asked the community to provide feedback for open-source tool developers and tool-user groups. The community's requests for tool developers include general improvement of tool documentation and easy-to-follow tutorials. Respondents encourage tool users to follow best-practices guidelines for imaging and to ask their image analysis questions on the Scientific Community Image Forum (forum.image.sc). We analyzed the community's preferred methods of learning, based on level of computational proficiency and work description. In general, written step-by-step and video tutorials are the community's preferred methods of learning, followed by interactive webinars and office hours with an expert. There is also enthusiasm for a centralized online location for existing educational resources. The survey results will help the community, especially developers, trainers, and organizations like COBA, decide how to structure and prioritize their efforts.

Impact statement: The bioimage analysis community consists of software developers, imaging experts, and users, all with different expertise, scientific backgrounds, and computational skill levels. The NIH-funded Center for Open Bioimage Analysis (COBA) was launched in 2020 to serve the cell biology community's growing need for sophisticated open-source software and workflows for light-microscopy image analysis. This paper shares the results of a COBA survey conducted to assess the most urgent ongoing needs for software and training in the community, and provides a helpful resource for software developers working in this domain.
Here, we describe the state of open-source bioimage analysis, developers’ and users’ requests from the community, and our resulting view of common goals that would serve and strengthen the community to advance imaging science.
Title: 2020 BioImage Analysis Survey: Community experiences and needs for the future. Biological Imaging.
Pub Date : 2021-08-02eCollection Date: 2021-01-01DOI: 10.1017/S2633903X21000015
Mira S Davidson, Clare Andradi-Brown, Sabrina Yahiya, Jill Chmielewski, Aidan J O'Donnell, Pratima Gurung, Myriam D Jeninga, Parichat Prommana, Dean W Andrew, Michaela Petter, Chairat Uthaipibull, Michelle J Boyle, George W Ashdown, Jeffrey D Dvorin, Sarah E Reece, Danny W Wilson, Kane A Cunningham, D Michael Ando, Michelle Dimon, Jake Baum
Microscopic examination of blood smears remains the gold standard for laboratory inspection and diagnosis of malaria. Smear inspection is, however, time-consuming and dependent on trained microscopists, with results varying in accuracy. We sought to develop an automated image analysis method that improves the accuracy and standardization of smear inspection while retaining capacity for expert confirmation and image archiving. Here, we present a machine learning method that achieves red blood cell (RBC) detection, differentiation between infected and uninfected cells, and parasite life-stage categorization from unprocessed, heterogeneous smear images. Built on a pretrained Faster Region-based Convolutional Neural Network (Faster R-CNN) model for RBC detection, our method performs accurately, with an average precision of 0.99 at an intersection-over-union threshold of 0.5. Application of a 50-layer residual neural network (ResNet-50) model to infected cells is also accurate, with an area under the receiver operating characteristic curve of 0.98. Finally, combining our method with a regression model successfully recapitulates the intraerythrocytic developmental cycle with accurate life-stage categorization. Combined with a mobile-friendly, web-based interface called PlasmoCount, our method permits rapid navigation through and review of results for quality assurance. By standardizing the assessment of Giemsa smears, our method markedly improves inspection reproducibility and presents a realistic route to both routine lab-based and future field-based automated malaria diagnosis.
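The staged pipeline described above (detect RBCs, classify each as infected or uninfected, then map a continuous regression output to a discrete life stage) can be sketched in plain Python. This is a minimal illustration of the aggregation logic only: the `Detection` fields, the 0.5 infection threshold, and the stage boundaries are illustrative assumptions, not values taken from the paper, and the detector, classifier, and regressor outputs are assumed to be precomputed.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Per-cell outputs from the (assumed precomputed) upstream models."""
    box: tuple            # (x0, y0, x1, y1) bounding box from the detector, in pixels
    infected_prob: float  # infected/uninfected classifier output, in [0, 1]
    stage_score: float    # regression output: position in the asexual cycle, scaled to [0, 1]

def stage_from_score(score: float) -> str:
    """Map a continuous life-cycle score to a discrete stage.
    The thresholds are hypothetical, chosen only to show the mapping step."""
    if score < 0.33:
        return "ring"
    elif score < 0.66:
        return "trophozoite"
    return "schizont"

def summarize(detections, infected_threshold=0.5):
    """Aggregate per-cell outputs into smear-level statistics
    (RBC count, parasitemia, and stage distribution)."""
    n_total = len(detections)
    infected = [d for d in detections if d.infected_prob >= infected_threshold]
    stages = {}
    for d in infected:
        stage = stage_from_score(d.stage_score)
        stages[stage] = stages.get(stage, 0) + 1
    parasitemia = len(infected) / n_total if n_total else 0.0
    return {"total_rbc": n_total, "parasitemia": parasitemia, "stages": stages}

# Usage: three hypothetical detections, two above the infection threshold.
cells = [
    Detection((0, 0, 10, 10), infected_prob=0.9, stage_score=0.2),
    Detection((10, 0, 20, 10), infected_prob=0.1, stage_score=0.1),
    Detection((20, 0, 30, 10), infected_prob=0.8, stage_score=0.7),
]
report = summarize(cells)
```

Keeping the continuous `stage_score` alongside the discrete label mirrors the paper's design, where a regression model recapitulates the developmental cycle rather than predicting a stage class directly.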
{"title":"Automated detection and staging of malaria parasites from cytological smears using convolutional neural networks.","authors":"Mira S Davidson, Clare Andradi-Brown, Sabrina Yahiya, Jill Chmielewski, Aidan J O'Donnell, Pratima Gurung, Myriam D Jeninga, Parichat Prommana, Dean W Andrew, Michaela Petter, Chairat Uthaipibull, Michelle J Boyle, George W Ashdown, Jeffrey D Dvorin, Sarah E Reece, Danny W Wilson, Kane A Cunningham, D Michael Ando, Michelle Dimon, Jake Baum","doi":"10.1017/S2633903X21000015","DOIUrl":"https://doi.org/10.1017/S2633903X21000015","url":null,"abstract":"<p><p>Microscopic examination of blood smears remains the gold standard for laboratory inspection and diagnosis of malaria. Smear inspection is, however, time-consuming and dependent on trained microscopists, with results varying in accuracy. We sought to develop an automated image analysis method that improves the accuracy and standardization of smear inspection while retaining capacity for expert confirmation and image archiving. Here, we present a machine learning method that achieves red blood cell (RBC) detection, differentiation between infected and uninfected cells, and parasite life-stage categorization from unprocessed, heterogeneous smear images. Built on a pretrained Faster Region-based Convolutional Neural Network (Faster R-CNN) model for RBC detection, our method performs accurately, with an average precision of 0.99 at an intersection-over-union threshold of 0.5. Application of a 50-layer residual neural network (ResNet-50) model to infected cells is also accurate, with an area under the receiver operating characteristic curve of 0.98. Finally, combining our method with a regression model successfully recapitulates the intraerythrocytic developmental cycle with accurate life-stage categorization. Combined with a mobile-friendly, web-based interface called PlasmoCount, our method permits rapid navigation through and review of results for quality assurance. 
By standardizing the assessment of Giemsa smears, our method markedly improves inspection reproducibility and presents a realistic route to both routine lab-based and future field-based automated malaria diagnosis.</p>","PeriodicalId":72371,"journal":{"name":"Biological imaging","volume":"1 ","pages":"e2"},"PeriodicalIF":0.0,"publicationDate":"2021-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8724263/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39940074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}