
Latest publications in Biological imaging

Cell-TypeAnalyzer: A flexible Fiji/ImageJ plugin to classify cells according to user-defined criteria.
Pub Date: 2022-05-20 | eCollection Date: 2022-01-01 | DOI: 10.1017/S2633903X22000058
Ana Cayuela López, José A Gómez-Pedrero, Ana M O Blanco, Carlos Oscar S Sorzano

Fluorescence microscopy techniques are used increasingly to visualize and analyze many biological processes in the life sciences. We describe a semiautomated and versatile tool called Cell-TypeAnalyzer that avoids the time-consuming and biased manual classification of cells into cell types. It is an open-source plugin for Fiji or ImageJ that detects and classifies cells in 2D images. Our workflow consists of (a) image preprocessing actions, data spatial calibration, and selection of a region of interest for analysis; (b) segmentation to isolate cells from background (optionally including user-defined preprocessing steps that help identify cells); (c) extraction of features from each cell; (d) filters to select relevant cells; (e) definition of the specific criteria that characterize each cell type; (f) cell classification; and (g) flexible analysis of the results. Our software provides a modular and flexible strategy for cell classification through a wizard-like graphical user interface that guides the user intuitively through each step of the analysis. The procedure can be applied in batch mode to multiple microscopy files: once the analysis is set up, it runs automatically and efficiently on many images. The plugin requires no programming skills and can analyze cells from many different acquisition setups.
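Workflow steps (b)-(f) can be sketched in plain Python (the toy image, threshold, and "small"/"large" classes below are invented for illustration; the plugin configures all of this interactively rather than in code):

```python
# Plain-Python sketch of workflow steps (b)-(f): threshold segmentation,
# connected-component labeling, area feature extraction, noise filtering,
# and classification. Image, threshold, and class names are invented.

from collections import deque

IMG = [
    [0, 9, 9, 0, 0, 0],
    [0, 9, 9, 0, 8, 0],
    [0, 0, 0, 0, 0, 0],
    [7, 7, 7, 0, 0, 0],
    [7, 7, 7, 0, 0, 9],
]
THRESHOLD = 5  # (b) segmentation: foreground where intensity > threshold

def connected_components(img, thr):
    """Label 4-connected foreground regions; each region is one cell."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    cells = []
    for y in range(h):
        for x in range(w):
            if img[y][x] > thr and not seen[y][x]:
                pixels, queue = [], deque([(y, x)])  # flood fill this cell
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] > thr and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                cells.append(pixels)
    return cells

cells = connected_components(IMG, THRESHOLD)
areas = [len(c) for c in cells]                          # (c) feature extraction
kept = [a for a in areas if a >= 2]                      # (d) filter 1-pixel noise
labels = ["large" if a >= 5 else "small" for a in kept]  # (e)+(f) classification
print(labels)
```

The same pattern generalizes to any per-cell feature (intensity, shape descriptors) and any number of user-defined classes.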

Citations: 0
Contour: A semi-automated segmentation and quantitation tool for cryo-soft-X-ray tomography
Pub Date: 2022-05-17 | DOI: 10.1017/S2633903X22000046
Kamal L. Nahas, João Ferreira Fernandes, Nina Vyas, C. Crump, Stephen Graham, M. Harkiolaki
Cryo-soft-X-ray tomography is increasingly used in biological research to study the morphology of cellular compartments and how they change in response to different stimuli, such as viral infection. Segmentation of these compartments is limited by time-consuming manual tools or by machine-learning algorithms that require extensive time and effort to train. Here we describe Contour, a new, easy-to-use, highly automated segmentation tool that accelerates the segmentation of tomograms to delineate distinct cellular compartments. With Contour, cellular structures can be segmented based on their projection intensity and geometrical width by applying a threshold range to the image and excluding noise narrower than the cellular compartments of interest. This method is less laborious and less prone to errors of human judgement than current tools that require features to be traced manually, and it does not require training datasets as machine-learning-driven segmentation would. We show that high-contrast compartments such as mitochondria, lipid droplets, and features at the cell surface can easily be segmented with this technique in the context of investigating herpes simplex virus 1 infection. Contour can also extract geometric measurements from 3D segmented volumes, providing a new way to quantitate cryo-soft-X-ray tomography data. Contour can be freely downloaded at github.com/kamallouisnahas/Contour.
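The two core ideas, an intensity threshold *range* plus a minimum feature width that excludes narrow noise, can be sketched in one dimension (the values and parameters below are invented; this is not Contour's actual implementation):

```python
# 1D sketch of Contour's two filters: keep only intensities inside a
# threshold range [LO, HI], then discard runs narrower than MIN_WIDTH.
# All values are invented for illustration.

ROW = [1, 6, 7, 6, 1, 8, 1, 6, 6, 6, 6, 2, 9, 9]
LO, HI = 5, 7  # threshold range: keep intensities in [LO, HI]
MIN_WIDTH = 3  # reject runs narrower than the compartments of interest

def segment(row, lo, hi, min_width):
    """Return (start, end) index pairs of kept foreground runs."""
    mask = [lo <= v <= hi for v in row]
    runs, start = [], None
    for i, m in enumerate(mask + [False]):  # sentinel closes a trailing run
        if m and start is None:
            start = i
        elif not m and start is not None:
            if i - start >= min_width:      # width filter removes narrow noise
                runs.append((start, i))
            start = None
    return runs

print(segment(ROW, LO, HI, MIN_WIDTH))
```

Note that both the lone bright pixel (outside the range) and the short runs (below the width cutoff) are rejected, which is the behavior the abstract describes for 2D/3D tomograms.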
Citations: 2
3D cell morphology detection by association for embryo heart morphogenesis.
Pub Date: 2022-04-22 | eCollection Date: 2022-01-01 | DOI: 10.1017/S2633903X22000022
Rituparna Sarkar, Daniel Darby, Sigolène Meilhac, Jean-Christophe Olivo-Marin

Advances in tissue engineering for cardiac regenerative medicine require a cellular-level understanding of the mechanism of cardiac muscle growth during the embryonic developmental stage. Computational methods that automate cell segmentation in 3D and deliver accurate, quantitative morphology of cardiomyocytes are imperative to provide insight into the cell behavior underlying cardiac tissue growth. Detecting individual cells in volumetric images of dense tissue, beset by low signal-to-noise ratio and severe intensity inhomogeneity, is a challenging task. In this article, we develop a robust segmentation tool capable of extracting cellular morphological parameters from 3D multifluorescence images of murine heart captured via light-sheet microscopy. The proposed pipeline incorporates a neural network for 2D detection of nuclei and cell membranes. A graph-based global association employs the 2D nuclei detections to reconstruct 3D nuclei. To solve the global object-association problem, we propose a novel optimization that embeds the network-flow algorithm in an alternating direction method of multipliers. The associated 3D nuclei then serve as the initialization of an active mesh model that yields the 3D segmentation of individual myocardial cells. The efficiency of our method over state-of-the-art methods is demonstrated through various qualitative and quantitative evaluations.
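The association step, linking per-slice 2D nucleus detections across z into 3D nuclei, can be illustrated with a toy greedy linker (the paper instead solves the association globally, embedding a network-flow algorithm in an ADMM optimization; the centroids and distance gate below are made up):

```python
# Toy greedy z-slice linker illustrating the association idea only.
# Centroids and the distance gate are invented for illustration.

import math

SLICES = [  # (x, y) centroids of 2D detections in consecutive z-slices
    [(2.0, 2.0), (8.0, 8.0)],
    [(2.2, 1.9), (7.9, 8.3)],
    [(2.1, 2.1)],  # the second nucleus ends before this slice
]
GATE = 1.0  # maximum centroid distance allowed between consecutive slices

def link_slices(slices, gate):
    tracks = [[c] for c in slices[0]]
    for detections in slices[1:]:
        unused = list(detections)
        for track in tracks:
            if not unused:
                break
            dist, best = min((math.dist(track[-1], c), c) for c in unused)
            if dist <= gate:
                track.append(best)
                unused.remove(best)
        tracks.extend([c] for c in unused)  # unmatched detections start new tracks
    return tracks

tracks = link_slices(SLICES, GATE)
print([len(t) for t in tracks])
```

A greedy linker commits to locally nearest matches; the paper's global formulation avoids exactly this weakness by optimizing all slice-to-slice assignments jointly.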

Citations: 0
DeepSpot: A deep neural network for RNA spot enhancement in single-molecule fluorescence in-situ hybridization microscopy images.
Pub Date: 2022-04-19 | eCollection Date: 2022-01-01 | DOI: 10.1017/S2633903X22000034
Emmanuel Bouilhol, Anca F Savulescu, Edgar Lefevre, Benjamin Dartigues, Robyn Brackin, Macha Nikolski

Detection of RNA spots in single-molecule fluorescence in-situ hybridization microscopy images remains a difficult task, especially when applied to large volumes of data. The variable intensity of RNA spots, combined with the high noise level of the images, often requires manual adjustment of the spot-detection threshold for each image. In this work, we introduce DeepSpot, a deep-learning-based tool specifically designed for RNA spot enhancement that enables spot detection without per-image parameter tuning, and we show how our method enables accurate downstream spot detection. DeepSpot's architecture is inspired by small-object detection approaches: it incorporates dilated convolutions into a module specifically designed to aggregate context around small objects, and uses residual convolutions to propagate this information through the network. This enables DeepSpot to enhance all RNA spots to the same intensity, circumventing the need for parameter tuning. We evaluated how easily spots can be detected in images enhanced with our method by testing DeepSpot on 20 simulated and 3 experimental datasets, and showed that an accuracy of more than 97% is achieved. Moreover, comparison with an alternative deep-learning approach for mRNA spot detection (deepBlink) indicated that DeepSpot provides more precise mRNA detection. In addition, we generated single-molecule fluorescence in-situ hybridization images of mouse fibroblasts in a wound-healing assay to evaluate whether DeepSpot enhancement enables seamless mRNA spot detection and thus streamlines studies of localized mRNA expression in cells.
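The context-aggregation idea, stacked dilated convolutions with residual connections, can be sketched in plain Python on a 1D signal (a toy, not DeepSpot's actual network or kernels):

```python
# Toy sketch of the architectural idea: stacking convolutions with dilation
# rates 1, 2, 4 grows the receptive field rapidly, aggregating context
# around a small object, while residual connections (output = input +
# conv(input)) preserve the spot itself. Signal and kernel are invented.

def dilated_conv1d(signal, kernel, dilation):
    """'Same'-padded 1D convolution with the given dilation rate."""
    center = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + (k - center) * dilation
            if 0 <= j < len(signal):
                acc += w * signal[j]
        out.append(acc)
    return out

def residual_block(signal, kernel, dilation):
    conv = dilated_conv1d(signal, kernel, dilation)
    return [s + c for s, c in zip(signal, conv)]

x = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]  # a lone "spot" at index 2
smoothing = [0.25, 0.5, 0.25]
for d in (1, 2, 4):  # receptive field grows to 3, 7, then 15 samples
    x = residual_block(x, smoothing, d)
print(x)
```

After three blocks the spot's influence reaches the signal edges, yet the spot location still dominates: the combination the abstract describes for enhancing small RNA spots in a large context.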

Citations: 0
COL0RME: Super-resolution microscopy based on sparse blinking/fluctuating fluorophore localization and intensity estimation.
Pub Date: 2022-02-16 | eCollection Date: 2022-01-01 | DOI: 10.1017/S2633903X22000010
Vasiliki Stergiopoulou, Luca Calatroni, Henrique de Morais Goulart, Sébastien Schaub, Laure Blanc-Féraud

To overcome the physical barriers imposed by light diffraction, super-resolution techniques are often applied in fluorescence microscopy. State-of-the-art approaches require specific and often demanding acquisition conditions to achieve adequate levels of both spatial and temporal resolution. Analyzing the stochastic fluctuations of fluorescent molecules offers a way around these limitations, as sufficiently high spatio-temporal resolution for live-cell imaging can be achieved with common microscopes and conventional fluorescent dyes. Based on this idea, we present COL0RME, a method for covariance-based ℓ0 super-resolution microscopy with intensity estimation, which achieves good spatio-temporal resolution by solving a sparse optimization problem in the covariance domain; we also discuss automatic parameter-selection strategies. The method consists of two steps: in the first, both the emitters' independence and the sparse distribution of the fluorescent molecules are exploited to provide accurate localization; in the second, real intensity values are estimated given the computed support. The paper provides several numerical results on both synthetic and real fluorescence microscopy images, along with several comparisons with state-of-the-art approaches. Our results show that COL0RME outperforms competing methods that exploit temporal fluctuations in a similar way; in particular, it achieves better localization, reduces background artifacts, and avoids fine parameter tuning.
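Why the covariance domain helps can be seen with a toy calculation (made-up intensity traces, not COL0RME's estimator): a blinking emitter produces temporal variance at its pixel, while a bright but constant background pixel contributes none, and the cross-covariance between independent signals stays near zero.

```python
# Toy covariance-domain calculation with invented per-pixel time series.

def cov(a, b):
    """Sample covariance of two equal-length intensity time series."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

emitter = [3, 9, 3, 9, 3, 9]     # blinking fluorophore pixel over 6 frames
background = [6, 6, 6, 6, 6, 6]  # constant background pixel, same mean

print(cov(emitter, emitter), cov(background, background))
print(cov(emitter, background))
```

Both pixels have the same mean intensity, so a mean image cannot separate them, but the variance (the covariance diagonal) isolates the emitter, which is the signal COL0RME's sparse optimization operates on.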

Citations: 0
Automated modeling of protein accumulation at DNA damage sites using qFADD.py.
Pub Date: 2022-01-01 | Epub Date: 2022-08-30 | DOI: 10.1017/s2633903x22000083
Samuel Bowerman, Jyothi Mahadevan, Philip Benson, Johannes Rudolph, Karolin Luger

Eukaryotic cells are constantly subject to DNA damage, often with detrimental consequences for the health of the organism. Cells mitigate this damage through a variety of repair pathways involving a diverse and large number of different proteins. To better understand the cellular response to DNA damage, one needs accurate measurements of the accumulation, retention, and dissipation timescales of these repair proteins. Here, we describe an automated implementation of the "quantitation of fluorescence accumulation after DNA damage" method that greatly enhances the analysis and quantitation of laser microirradiation, a widely used technique for studying the recruitment of DNA repair proteins to sites of DNA damage. This open-source implementation ("qFADD.py") is available as a stand-alone software package that can be run on laptops or computer clusters. Our implementation includes corrections for nuclear drift, an automated grid search for the model of best fit, and the ability to model both horizontal-striping and speckle experiments. To improve statistical rigor, the grid-search algorithm also includes automated simulation of replicates. As a practical example, we present and discuss the recruitment dynamics of the early responder PARP1 to DNA damage sites.
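The grid-search-for-best-fit idea can be sketched as follows (the saturating model, parameter grids, and data below are illustrative, not qFADD.py's actual diffusion model): scan a grid of candidate parameters and keep the one minimizing squared error against a measured accumulation curve.

```python
# Grid-search sketch with an invented saturating-accumulation model and
# invented measurements; qFADD.py fits its own model with simulated
# replicates rather than this toy.

import math

t = [0, 1, 2, 3, 4, 5]                          # time after irradiation (a.u.)
measured = [0.0, 0.39, 0.63, 0.78, 0.86, 0.92]  # normalized accumulation

def model(ti, amp, rate):
    # saturating accumulation: amp * (1 - exp(-rate * t))
    return amp * (1 - math.exp(-rate * ti))

best = None
for amp in (0.8, 0.9, 1.0, 1.1):
    for rate in (0.25, 0.5, 0.75, 1.0):
        err = sum((model(ti, amp, rate) - m) ** 2 for ti, m in zip(t, measured))
        if best is None or err < best[0]:
            best = (err, amp, rate)

err, amp, rate = best
print(amp, rate)
```

Replacing the single `measured` curve with many simulated replicates, as qFADD.py does, turns the same loop into a statistically grounded fit.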

Citations: 4
eHooke: A tool for automated image analysis of spherical bacteria based on cell cycle progression.
Pub Date: 2021-09-24 | eCollection Date: 2021-01-01 | DOI: 10.1017/S2633903X21000027
Bruno M Saraiva, Ludwig Krippahl, Sérgio R Filipe, Ricardo Henriques, Mariana G Pinho

Fluorescence microscopy is a critical tool for cell biology studies of bacterial cell division and morphogenesis. Because the analysis of fluorescence microscopy images has evolved beyond initial qualitative studies, numerous image analysis tools have been developed to extract quantitative parameters of cell morphology and organization. To understand the cellular processes required for bacterial growth and division, it is particularly important to perform such analysis in the context of cell cycle progression. However, manual assignment of cell cycle stages is laborious and prone to user bias. Although cell elongation can serve as a proxy for cell cycle progression in rod-shaped or ovoid bacteria, this is not the case for cocci such as Staphylococcus aureus. Here, we describe eHooke, an image analysis framework developed specifically for automated analysis of microscopy images of spherical bacterial cells. eHooke contains a trained artificial neural network that automatically classifies the cell cycle phase of individual S. aureus cells. Users can then apply various functions to obtain biologically relevant information on the morphological features of individual cells and the cellular localization of proteins, in the context of the cell cycle.
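A minimal stand-in for the classification step might look like this (eHooke itself uses a trained neural network; the septum-fraction feature, its cutoffs, and the three-phase mapping below are invented for illustration):

```python
# Hypothetical rule-based stand-in for eHooke's classifier: map a per-cell
# morphological feature to a cell cycle phase. Feature and cutoffs invented.

def classify_phase(septum_fraction):
    """septum_fraction: fraction of the division plane with membrane signal."""
    if septum_fraction < 0.1:
        return 1  # phase 1: no septum yet
    if septum_fraction < 0.9:
        return 2  # phase 2: septum growing inward
    return 3      # phase 3: septum complete, cell about to divide

cells = {"cell_a": 0.02, "cell_b": 0.55, "cell_c": 0.97}
phases = {name: classify_phase(f) for name, f in cells.items()}
print(phases)
```

A trained network replaces the hand-picked cutoffs with boundaries learned from annotated cells, which is what removes the user bias the abstract mentions.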

Citations: 0
2020 BioImage Analysis Survey: Community experiences and needs for the future
Pub Date: 2021-08-17 | DOI: 10.1017/S2633903X21000039
Nasim Jamali, E. T. Dobson, K. Eliceiri, Anne E Carpenter, B. Cimini
In this paper, we summarize a global survey of 484 participants in the imaging community, conducted in 2020 through the NIH Center for Open BioImage Analysis (COBA). This 23-question survey covered experience with image analysis, scientific background and demographics, and views and requests from different members of the imaging community. Through open-ended questions, we asked the community to provide feedback for open-source tool developers and tool user groups. The community's requests for tool developers include general improvement of tool documentation and easy-to-follow tutorials. Respondents encourage tool users to follow best-practice guidelines for imaging and to ask their image analysis questions on the Scientific Community Image Forum (forum.image.sc). We analyzed the community's preferred methods of learning, based on level of computational proficiency and work description. In general, written step-by-step and video tutorials are the community's preferred methods of learning, followed by interactive webinars and office hours with an expert. There is also enthusiasm for a centralized online location for existing educational resources. The survey results will help the community, especially developers, trainers, and organizations like COBA, decide how to structure and prioritize their efforts.

Impact statement: The bioimage analysis community consists of software developers, imaging experts, and users, all with different expertise, scientific backgrounds, and computational skill levels. The NIH-funded Center for Open Bioimage Analysis (COBA) was launched in 2020 to serve the cell biology community's growing need for sophisticated open-source software and workflows for light-microscopy image analysis. This paper shares the results of a COBA survey assessing the community's most urgent ongoing needs for software and training, and provides a helpful resource for software developers working in this domain.
Here, we describe the state of open-source bioimage analysis, developers’ and users’ requests from the community, and our resulting view of common goals that would serve and strengthen the community to advance imaging science.
在本文中,我们总结了2020年通过NIH开放生物图像分析中心(COBA)对成像界484名参与者进行的全球调查。这项23个问题的调查涵盖了图像分析的经验、科学背景和人口统计学,以及来自成像社区不同成员的观点和要求。通过开放式问题,我们要求社区为开源工具开发人员和工具用户组提供反馈。社区对工具开发人员的要求包括对工具文档和易于理解的教程进行全面改进。受访者鼓励工具用户遵循成像最佳实践指南,并在科学社区图像论坛(forum.image.sc)上提出他们的图像分析问题。我们根据计算熟练程度和工作描述分析了社区首选的学习方法。一般来说,书面的循序渐进和视频教程是社区首选的学习方法,其次是互动网络研讨会和与专家的办公时间。人们还热衷于将现有的教育资源集中在网上。调查结果将帮助社区,特别是开发人员、培训人员和像COBA这样的组织,决定如何组织和优先考虑他们的工作。Bioimage分析社区由软件开发人员、成像专家和用户组成,他们都具有不同的专业知识、科学背景和计算技能水平。美国国立卫生研究院资助的开放生物图像分析中心(COBA)于2020年启动,以满足细胞生物学界对光学显微镜图像分析的复杂开源软件和工作流程日益增长的需求。本文分享了COBA调查的结果,以评估社区中对软件和培训最紧迫的持续需求,并为在该领域工作的软件开发人员提供有用的资源。在这里,我们描述了开源生物图像分析的现状,来自社区的开发者和用户的要求,以及我们对共同目标的看法,这些目标将服务并加强社区,以推进成像科学。
{"title":"2020 BioImage Analysis Survey: Community experiences and needs for the future","authors":"Nasim Jamali, E. T. Dobson, K. Eliceiri, Anne E Carpenter, B. Cimini","doi":"10.1017/S2633903X21000039","DOIUrl":"https://doi.org/10.1017/S2633903X21000039","url":null,"abstract":"In this paper, we summarize a global survey of 484 participants of the imaging community, conducted in 2020 through the NIH Center for Open BioImage Analysis (COBA). This 23-question survey covered experience with image analysis, scientific background and demographics, and views and requests from different members of the imaging community. Through open-ended questions we asked the community to provide feedback for the opensource tool developers and tool user groups. The community’s requests for tool developers include general improvement of tool documentation and easy-to-follow tutorials. Respondents encourage tool users to follow the best practices guidelines for imaging and ask their image analysis questions on the Scientific Community Image forum (forum.image.sc). We analyzed the community’s preferred method of learning, based on level of computational proficiency and work description. In general, written step-by-step and video tutorials are preferred methods of learning by the community, followed by interactive webinars and office hours with an expert. There is also enthusiasm for a centralized location online for existing educational resources. The survey results will help the community, especially developers, trainers, and organizations like COBA, decide how to structure and prioritize their efforts. Impact statement The Bioimage analysis community consists of software developers, imaging experts, and users, all with different expertise, scientific background, and computational skill levels. 
The NIH funded Center for Open Bioimage Analysis (COBA) was launched in 2020 to serve the cell biology community’s growing need for sophisticated open-source software and workflows for light microscopy image analysis. This paper shares the result of a COBA survey to assess the most urgent ongoing needs for software and training in the community and provide a helpful resource for software developers working in this domain. Here, we describe the state of open-source bioimage analysis, developers’ and users’ requests from the community, and our resulting view of common goals that would serve and strengthen the community to advance imaging science.","PeriodicalId":72371,"journal":{"name":"Biological imaging","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47053216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
Automated detection and staging of malaria parasites from cytological smears using convolutional neural networks.
Pub Date : 2021-08-02 eCollection Date: 2021-01-01 DOI: 10.1017/S2633903X21000015
Mira S Davidson, Clare Andradi-Brown, Sabrina Yahiya, Jill Chmielewski, Aidan J O'Donnell, Pratima Gurung, Myriam D Jeninga, Parichat Prommana, Dean W Andrew, Michaela Petter, Chairat Uthaipibull, Michelle J Boyle, George W Ashdown, Jeffrey D Dvorin, Sarah E Reece, Danny W Wilson, Kane A Cunningham, D Michael Ando, Michelle Dimon, Jake Baum

Microscopic examination of blood smears remains the gold standard for laboratory inspection and diagnosis of malaria. Smear inspection is, however, time-consuming and dependent on trained microscopists with results varying in accuracy. We sought to develop an automated image analysis method to improve accuracy and standardization of smear inspection that retains capacity for expert confirmation and image archiving. Here, we present a machine learning method that achieves red blood cell (RBC) detection, differentiation between infected/uninfected cells, and parasite life stage categorization from unprocessed, heterogeneous smear images. Based on a pretrained Faster Region-Based Convolutional Neural Networks (R-CNN) model for RBC detection, our model performs accurately, with an average precision of 0.99 at an intersection-over-union threshold of 0.5. Application of a residual neural network-50 model to infected cells also performs accurately, with an area under the receiver operating characteristic curve of 0.98. Finally, combining our method with a regression model successfully recapitulates intraerythrocytic developmental cycle with accurate lifecycle stage categorization. Combined with a mobile-friendly web-based interface, called PlasmoCount, our method permits rapid navigation through and review of results for quality assurance. By standardizing assessment of Giemsa smears, our method markedly improves inspection reproducibility and presents a realistic route to both routine lab and future field-based automated malaria diagnosis.

Citations: 15
EDITORIAL.
Pub Date : 2021-01-11 eCollection Date: 2021-01-01 DOI: 10.1017/S2633903X2000001X
Jean-Christophe Olivo-Marin
Citations: 0