
Latest publications in Biological imaging

Visualization and quality control tools for large-scale multiplex tissue analysis in TissUUmaps3
Pub Date : 2023-01-01 DOI: 10.1017/s2633903x23000053
Andrea Behanova, Christophe Avenel, Axel Andersson, Eduard Chelebian, Anna Klemm, Lina Wik, Arne Östman, Carolina Wählby
Large-scale multiplex tissue analysis aims to understand processes such as development and tumor formation by studying the occurrence and interaction of cells in local environments in, for example, tissue samples from patient cohorts. A typical procedure in the analysis is to delineate individual cells, classify them into cell types, and analyze their spatial relationships. All steps come with a number of challenges, and to address them and identify the bottlenecks of the analysis, it is necessary to include quality control tools in the analysis workflow. This makes it possible to optimize the steps and adjust settings in order to get better and more precise results. Additionally, the development of automated approaches for tissue analysis requires visual verification to reduce skepticism with regard to the accuracy of the results. Quality control tools could be used to build users’ trust in automated approaches. In this paper, we present three plugins for visualization and quality control in large-scale multiplex tissue analysis of microscopy images. The first plugin focuses on the quality of cell staining, the second one was made for interactive evaluation and comparison of different cell classification results, and the third one serves for reviewing interactions of different cell types.
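As a rough illustration of the kind of staining-quality check the first plugin addresses, the sketch below flags markers whose per-cell intensity distribution shows too little contrast to support reliable classification. It is not TissUUmaps3 code; the function name, marker names, and thresholds are placeholders.

```python
# Hypothetical staining-quality check (not the TissUUmaps3 plugin): given a
# cells-by-markers intensity table, flag markers whose signal is too weak or too
# uniform to support reliable cell classification.
import numpy as np

def flag_weak_markers(intensities, marker_names, positive_quantile=0.9, min_contrast=3.0):
    """intensities: (n_cells, n_markers) array of mean marker intensity per cell."""
    flagged = []
    for j, name in enumerate(marker_names):
        values = intensities[:, j]
        background = np.median(values)                   # proxy for negative cells
        bright = np.quantile(values, positive_quantile)  # proxy for positive cells
        contrast = bright / (background + 1e-9)
        if contrast < min_contrast:
            flagged.append((name, float(contrast)))
    return flagged

# Synthetic example: only the second, low-contrast marker should be flagged.
rng = np.random.default_rng(0)
good = np.concatenate([rng.normal(1, 0.2, 700), rng.normal(8, 1.0, 300)])
weak = rng.normal(1, 0.3, 1000)
print(flag_weak_markers(np.column_stack([good, weak]), ["CD3", "CD20"]))
```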
Citations: 1
ClusterAlign: A fiducial tracking and tilt series alignment tool for thick sample tomography.
Pub Date : 2022-08-05 eCollection Date: 2022-01-01 DOI: 10.1017/S2633903X22000071
Shahar Seifer, Michael Elbaum

Thick specimens, as encountered in cryo-scanning transmission electron tomography, offer special challenges to conventional reconstruction workflows. The visibility of features, including gold nanoparticles introduced as fiducial markers, varies strongly through the tilt series. As a result, tedious manual refinement may be required in order to produce a successful alignment. Information from highly tilted views must often be excluded to the detriment of axial resolution in the reconstruction. We introduce here an approach to tilt series alignment based on identification of fiducial particle clusters that transform coherently in rotation, essentially those that lie at similar depth. Clusters are identified by comparison of tilted views with a single untilted reference, rather than with adjacent tilts. The software, called ClusterAlign, proves robust to poor signal to noise ratio and varying visibility of the individual fiducials and is successful in carrying the alignment to the ends of the tilt series where other methods tend to fail. ClusterAlign may be used to generate a list of tracked fiducials, to align a tilt series, or to perform a complete 3D reconstruction. Tools to evaluate alignment error by projection matching are included. Execution involves no manual intervention, and adherence to standard file formats facilitates an interface with other software, particularly IMOD/etomo, tomo3d, and tomoalign.
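The clustering idea can be illustrated with a toy calculation: for a tilt about the y-axis, a marker at depth z appears at roughly x' = x*cos(theta) + z*sin(theta), so the residual x' - x*cos(theta) is proportional to depth, and markers with similar residuals move coherently. The sketch below groups matched fiducials by that residual; it only illustrates the geometry and is not the ClusterAlign algorithm, and the function name and gap threshold are placeholders.

```python
# Toy depth grouping of fiducial markers (an illustration of the geometry, not
# the ClusterAlign algorithm). For a tilt about the y-axis, x' ~ x*cos(theta) +
# z*sin(theta), so the residual x' - x*cos(theta) is proportional to depth z.
import numpy as np

def depth_clusters(ref_xy, tilt_xy, theta_deg, gap=15.0):
    """Group matched fiducials whose apparent shift (and hence depth) is similar.

    ref_xy, tilt_xy: (n, 2) marker coordinates in the untilted reference and in
    one tilted view, in the same order; gap: residual jump (pixels) that opens a
    new cluster.
    """
    theta = np.deg2rad(theta_deg)
    residual = tilt_xy[:, 0] - ref_xy[:, 0] * np.cos(theta)  # ~ z * sin(theta)
    order = np.argsort(residual)
    labels = np.zeros(len(residual), dtype=int)
    group = 0
    for prev, nxt in zip(order[:-1], order[1:]):
        if residual[nxt] - residual[prev] > gap:             # large jump -> new depth group
            group += 1
        labels[nxt] = group
    return labels
```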

Citations: 0
Automatic classification and neurotransmitter prediction of synapses in electron microscopy.
Pub Date : 2022-07-29 eCollection Date: 2022-01-01 DOI: 10.1017/S2633903X2200006X
Angela Zhang, S Shailja, Cezar Borba, Yishen Miao, Michael Goebel, Raphael Ruschel, Kerrianne Ryan, William Smith, B S Manjunath

This paper presents a deep-learning-based workflow to detect synapses and predict their neurotransmitter type in the primitive chordate Ciona intestinalis (Ciona) electron microscopic (EM) images. Identifying synapses from EM images to build a full map of connections between neurons is a labor-intensive process and requires significant domain expertise. Automation of synapse classification would hasten the generation and analysis of connectomes. Furthermore, inferences concerning neuron type and function from synapse features are in many cases difficult to make. Finding the connection between synapse structure and function is an important step in fully understanding a connectome. Class Activation Maps derived from the convolutional neural network provide insights on important features of synapses based on cell type and function. The main contribution of this work is in the differentiation of synapses by neurotransmitter type through the structural information in their EM images. This enables the prediction of neurotransmitter types for neurons in Ciona, which were previously unknown. The prediction model with code is available on GitHub.
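For a network whose last convolutional layer feeds global average pooling and a linear classifier, a Class Activation Map of the kind mentioned above is a channel-weighted sum of the final feature maps. The snippet below is a generic CAM computation, not the authors' code; the array shapes are assumptions.

```python
# Generic Class Activation Map (weighted sum of final feature maps); a sketch of
# the visualization described above, not the authors' code.
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """feature_maps: (C, H, W) activations of the last conv layer for one image;
    class_weights: (C,) linear-classifier weights for the class of interest.
    Returns an (H, W) map of class evidence rescaled to [0, 1]."""
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))  # sum_k w_k * f_k(x, y)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```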

Citations: 0
Cell-TypeAnalyzer: A flexible Fiji/ImageJ plugin to classify cells according to user-defined criteria.
Pub Date : 2022-05-20 eCollection Date: 2022-01-01 DOI: 10.1017/S2633903X22000058
Ana Cayuela López, José A Gómez-Pedrero, Ana M O Blanco, Carlos Oscar S Sorzano

Fluorescence microscopy techniques have experienced a substantial increase in the visualization and analysis of many biological processes in life science. We describe a semiautomated and versatile tool called Cell-TypeAnalyzer to avoid the time-consuming and biased manual classification of cells according to cell types. It consists of an open-source plugin for Fiji or ImageJ to detect and classify cells in 2D images. Our workflow consists of (a) image preprocessing actions, data spatial calibration, and region of interest for analysis; (b) segmentation to isolate cells from background (optionally including user-defined preprocessing steps helping the identification of cells); (c) extraction of features from each cell; (d) filters to select relevant cells; (e) definition of specific criteria to be included in the different cell types; (f) cell classification; and (g) flexible analysis of the results. Our software provides a modular and flexible strategy to perform cell classification through a wizard-like graphical user interface in which the user is intuitively guided through each step of the analysis. This procedure may be applied in batch mode to multiple microscopy files. Once the analysis is set up, it can be automatically and efficiently performed on many images. The plugin does not require any programming skill and can analyze cells in many different acquisition setups.
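The plugin itself runs in Fiji/ImageJ and requires no programming; purely to illustrate the segment-measure-filter-classify logic of steps (b)-(f), here is a hypothetical Python sketch using scikit-image, with placeholder thresholds and a made-up marker criterion.

```python
# Hypothetical Python analogue of the segment-measure-filter-classify workflow;
# not Cell-TypeAnalyzer code (the plugin runs in Fiji/ImageJ without programming).
from skimage import filters, measure, morphology

def classify_cells(image, marker_image, min_area=50, marker_threshold=500):
    """image: 2D grayscale array used for segmentation; marker_image: same-shape
    channel whose mean intensity defines a made-up classification criterion."""
    mask = image > filters.threshold_otsu(image)                     # (b) segmentation
    mask = morphology.remove_small_objects(mask, min_size=min_area)  # (d) filter debris
    labels = measure.label(mask)
    cells = []
    for region in measure.regionprops(labels, intensity_image=marker_image):  # (c) features
        cell_type = ("marker-positive" if region.mean_intensity > marker_threshold
                     else "marker-negative")                         # (e)+(f) criterion -> type
        cells.append({"label": region.label, "area": region.area,
                      "mean_marker": region.mean_intensity, "type": cell_type})
    return cells
```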

Citations: 0
Contour: A semi-automated segmentation and quantitation tool for cryo-soft-X-ray tomography
Pub Date : 2022-05-17 DOI: 10.1017/S2633903X22000046
Kamal L. Nahas, João Ferreira Fernandes, Nina Vyas, C. Crump, Stephen Graham, M. Harkiolaki
Cryo-soft-X-ray tomography is being increasingly used in biological research to study the morphology of cellular compartments and how they change in response to different stimuli, such as viral infections. Segmentation of these compartments is limited by time-consuming manual tools or machine learning algorithms that require extensive time and effort to train. Here we describe Contour, a new, easy-to-use, highly automated segmentation tool that enables accelerated segmentation of tomograms to delineate distinct cellular compartments. Using Contour, cellular structures can be segmented based on their projection intensity and geometrical width by applying a threshold range to the image and excluding noise smaller in width than the cellular compartments of interest. This method is less laborious and less prone to errors from human judgement than current tools that require features to be manually traced, and it does not require training datasets as would machine-learning driven segmentation. We show that high-contrast compartments such as mitochondria, lipid droplets, and features at the cell surface can be easily segmented with this technique in the context of investigating herpes simplex virus 1 infection. Contour can extract geometric measurements from 3D segmented volumes, providing a new method to quantitate cryo-soft-X-ray tomography data. Contour can be freely downloaded at github.com/kamallouisnahas/Contour.
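A simplified stand-in for the threshold-range-plus-width idea is sketched below using scikit-image; the morphological opening used to discard thin noise is an assumption about one possible implementation, not Contour's actual code, and the numeric parameters are placeholders.

```python
# Simplified stand-in for segmentation by threshold range plus width filtering;
# not the Contour implementation (available at github.com/kamallouisnahas/Contour).
from skimage import measure, morphology

def segment_by_threshold_range(volume, low, high, min_width=3, min_voxels=200):
    """volume: 3D reconstructed absorption map. Keep voxels inside [low, high],
    suppress features thinner than min_width voxels with a morphological opening,
    and drop tiny connected components before labelling."""
    mask = (volume >= low) & (volume <= high)
    mask = morphology.binary_opening(mask, morphology.ball(min_width // 2))
    mask = morphology.remove_small_objects(mask, min_size=min_voxels)
    return measure.label(mask)
```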
Citations: 2
3D cell morphology detection by association for embryo heart morphogenesis.
Pub Date : 2022-04-22 eCollection Date: 2022-01-01 DOI: 10.1017/S2633903X22000022
Rituparna Sarkar, Daniel Darby, Sigolène Meilhac, Jean-Christophe Olivo-Marin

Advances in tissue engineering for cardiac regenerative medicine require cellular-level understanding of the mechanism of cardiac muscle growth during the embryonic developmental stage. Computational methods to automatize cell segmentation in 3D and deliver accurate, quantitative morphology of cardiomyocytes are imperative to provide insight into cell behavior underlying cardiac tissue growth. Detecting individual cells from volumetric images of dense tissue, with low signal-to-noise ratio and severe intensity inhomogeneity, is a challenging task. In this article, we develop a robust segmentation tool capable of extracting cellular morphological parameters from 3D multifluorescence images of murine heart, captured via light-sheet microscopy. The proposed pipeline incorporates a neural network for 2D detection of nuclei and cell membranes. A graph-based global association employs the 2D nuclei detections to reconstruct 3D nuclei. A novel optimization embedding the network flow algorithm in an alternating direction method of multipliers is proposed to solve the global object association problem. The associated 3D nuclei serve as the initialization of an active mesh model to obtain the 3D segmentation of individual myocardial cells. The efficiency of our method over the state-of-the-art methods is observed via various qualitative and quantitative evaluations.
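As a greatly simplified stand-in for the association step, the sketch below links 2D nucleus detections in consecutive z-slices with a per-slice assignment problem; the paper's method instead solves a global graph association with the ADMM-embedded network flow formulation, which is not reproduced here, and the distance threshold is a placeholder.

```python
# Greatly simplified stand-in for the association step: per-slice optimal matching
# of nucleus centroids. The paper solves a global graph association with an
# ADMM-embedded network flow, which this sketch does not reproduce.
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def link_slices(detections, max_dist=10.0):
    """detections: list over z-slices of (n_i, 2) arrays of nucleus centroids.
    Returns (z, index_in_slice_z, index_in_slice_z_plus_1) links."""
    links = []
    for z in range(len(detections) - 1):
        cost = cdist(detections[z], detections[z + 1])  # pairwise centroid distances
        rows, cols = linear_sum_assignment(cost)        # one-to-one matching per slice pair
        links += [(z, int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
    return links
```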

Citations: 0
DeepSpot: A deep neural network for RNA spot enhancement in single-molecule fluorescence in-situ hybridization microscopy images.
Pub Date : 2022-04-19 eCollection Date: 2022-01-01 DOI: 10.1017/S2633903X22000034
Emmanuel Bouilhol, Anca F Savulescu, Edgar Lefevre, Benjamin Dartigues, Robyn Brackin, Macha Nikolski

Detection of RNA spots in single-molecule fluorescence in-situ hybridization microscopy images remains a difficult task, especially when applied to large volumes of data. The variable intensity of RNA spots combined with the high noise level of the images often requires manual adjustment of the spot detection thresholds for each image. In this work, we introduce DeepSpot, a Deep Learning-based tool specifically designed for RNA spot enhancement that enables spot detection without the need to resort to per-image parameter tuning. We show how our method can enable accurate downstream spot detection. DeepSpot's architecture is inspired by small object detection approaches. It incorporates dilated convolutions into a module specifically designed for context aggregation for small objects and uses Residual Convolutions to propagate this information along the network. This enables DeepSpot to enhance all RNA spots to the same intensity, and thus circumvents the need for parameter tuning. We evaluated how easily spots can be detected in images enhanced with our method by testing DeepSpot on 20 simulated and 3 experimental datasets, and showed that an accuracy of more than 97% is achieved. Moreover, comparison with alternative deep learning approaches for mRNA spot detection (deepBlink) indicated that DeepSpot provides more precise mRNA detection. In addition, we generated single-molecule fluorescence in-situ hybridization images of mouse fibroblasts in a wound healing assay to evaluate whether DeepSpot enhancement can enable seamless mRNA spot detection and thus streamline studies of localized mRNA expression in cells.
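A minimal sketch of a dilated residual block of the kind described (dilated convolutions for context aggregation plus a residual connection) is given below in PyTorch; it illustrates the idea rather than the published DeepSpot architecture, and the channel counts and dilation rates are placeholders.

```python
# Minimal dilated residual block illustrating the idea; not the published DeepSpot
# architecture, and the channel counts and dilation rates are placeholders.
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    def __init__(self, channels=32, dilations=(1, 2, 4)):
        super().__init__()
        layers = []
        for d in dilations:
            # padding = dilation keeps the spatial size for 3x3 kernels while the
            # growing dilation enlarges the receptive field (context aggregation)
            layers += [nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d),
                       nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x + self.body(x)  # residual connection propagates the input signal

x = torch.randn(1, 32, 128, 128)          # one 32-channel feature map
print(DilatedResidualBlock()(x).shape)    # torch.Size([1, 32, 128, 128])
```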

Citations: 0
COL0RME: Super-resolution microscopy based on sparse blinking/fluctuating fluorophore localization and intensity estimation.
Pub Date : 2022-02-16 eCollection Date: 2022-01-01 DOI: 10.1017/S2633903X22000010
Vasiliki Stergiopoulou, Luca Calatroni, Henrique de Morais Goulart, Sébastien Schaub, Laure Blanc-Féraud

To overcome the physical barriers caused by light diffraction, super-resolution techniques are often applied in fluorescence microscopy. State-of-the-art approaches require specific and often demanding acquisition conditions to achieve adequate levels of both spatial and temporal resolution. Analyzing the stochastic fluctuations of the fluorescent molecules provides a solution to the aforementioned limitations, as sufficiently high spatio-temporal resolution for live-cell imaging can be achieved using common microscopes and conventional fluorescent dyes. Based on this idea, we present COL0RME, a method for covariance-based super-resolution microscopy with intensity estimation, which achieves good spatio-temporal resolution by solving a sparse optimization problem in the covariance domain and discuss automatic parameter selection strategies. The method is composed of two steps: the former where both the emitters' independence and the sparse distribution of the fluorescent molecules are exploited to provide an accurate localization; the latter where real intensity values are estimated given the computed support. The paper is furnished with several numerical results both on synthetic and real fluorescence microscopy images and several comparisons with state-of-the art approaches are provided. Our results show that COL0RME outperforms competing methods exploiting analogously temporal fluctuations; in particular, it achieves better localization, reduces background artifacts, and avoids fine parameter tuning.
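The covariance-domain ingredient can be illustrated with the simplest second-order statistic of the fluctuating stack, the pixel-wise temporal variance, which suppresses stationary background; COL0RME then solves a sparse optimization on such statistics over a finer grid, which the sketch below does not attempt.

```python
# Sketch of the covariance-domain ingredient only: second-order temporal statistics
# of a fluctuating image stack. COL0RME goes further and solves a sparse
# optimization on such statistics over a finer grid, which is not reproduced here.
import numpy as np

def temporal_second_order(stack):
    """stack: (T, H, W) sequence of blinking/fluctuating emitters. Returns the
    pixel-wise temporal variance; a stationary background has zero variance, so
    it is suppressed relative to the fluctuating emitters."""
    fluctuations = stack - stack.mean(axis=0)   # remove the per-pixel temporal mean
    return (fluctuations ** 2).mean(axis=0)
```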

Citations: 0
Automated modeling of protein accumulation at DNA damage sites using qFADD.py.
Pub Date : 2022-01-01 Epub Date: 2022-08-30 DOI: 10.1017/s2633903x22000083
Samuel Bowerman, Jyothi Mahadevan, Philip Benson, Johannes Rudolph, Karolin Luger

Eukaryotic cells are constantly subject to DNA damage, often with detrimental consequences for the health of the organism. Cells mitigate this DNA damage through a variety of repair pathways involving a diverse and large number of different proteins. To better understand the cellular response to DNA damage, one needs accurate measurements of the accumulation, retention, and dissipation timescales of these repair proteins. Here, we describe an automated implementation of the "quantitation of fluorescence accumulation after DNA damage" method that greatly enhances the analysis and quantitation of the widely used technique known as laser microirradiation, which is used to study the recruitment of DNA repair proteins to sites of DNA damage. This open-source implementation ("qFADD.py") is available as a stand-alone software package that can be run on laptops or computer clusters. Our implementation includes corrections for nuclear drift, an automated grid search for the model of a best fit, and the ability to model both horizontal striping and speckle experiments. To improve statistical rigor, the grid-search algorithm also includes automated simulation of replicates. As a practical example, we present and discuss the recruitment dynamics of the early responder PARP1 to DNA damage sites.
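Purely as an illustration of quantifying recruitment kinetics from such an experiment, the sketch below fits a generic exponential-plateau curve to a synthetic accumulation trace with SciPy; qFADD.py itself fits its diffusion-based model by grid search with simulated replicates, as described above, and the numbers here are invented.

```python
# Generic accumulation-curve fit for a laser-microirradiation experiment; qFADD.py
# itself fits a diffusion-based model by grid search with simulated replicates.
# The trace below is synthetic and the parameter values are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def accumulation(t, amplitude, rate):
    """Simple exponential approach to a plateau at the damage site."""
    return amplitude * (1.0 - np.exp(-rate * t))

t = np.linspace(0, 120, 60)                                   # 2 min of imaging (s)
trace = accumulation(t, 1.8, 0.05) + np.random.default_rng(1).normal(0, 0.05, t.size)
(amplitude, rate), _ = curve_fit(accumulation, t, trace, p0=(1.0, 0.1))
print(f"plateau ~ {amplitude:.2f}, half-time ~ {np.log(2) / rate:.1f} s")
```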

Citations: 4
eHooke: A tool for automated image analysis of spherical bacteria based on cell cycle progression.
Pub Date : 2021-09-24 eCollection Date: 2021-01-01 DOI: 10.1017/S2633903X21000027
Bruno M Saraiva, Ludwig Krippahl, Sérgio R Filipe, Ricardo Henriques, Mariana G Pinho

Fluorescence microscopy is a critical tool for cell biology studies on bacterial cell division and morphogenesis. Because the analysis of fluorescence microscopy images evolved beyond initial qualitative studies, numerous image analysis tools were developed to extract quantitative parameters on cell morphology and organization. To understand cellular processes required for bacterial growth and division, it is particularly important to perform such analysis in the context of cell cycle progression. However, manual assignment of cell cycle stages is laborious and prone to user bias. Although cell elongation can be used as a proxy for cell cycle progression in rod-shaped or ovoid bacteria, that is not the case for cocci, such as Staphylococcus aureus. Here, we describe eHooke, an image analysis framework developed specifically for automated analysis of microscopy images of spherical bacterial cells. eHooke contains a trained artificial neural network to automatically classify the cell cycle phase of individual S. aureus cells. Users can then apply various functions to obtain biologically relevant information on morphological features of individual cells and cellular localization of proteins, in the context of the cell cycle.
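As a toy illustration of phase classification from per-cell measurements, the sketch below trains a small scikit-learn network on synthetic features; it is not the eHooke classifier, which works on the images themselves, and the feature names, labels, and numbers are invented.

```python
# Toy phase classifier on synthetic per-cell features; not the eHooke network,
# which classifies S. aureus cell cycle phases directly from the images.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
septum_fraction = rng.uniform(0, 1, 300)        # invented "septum completeness" feature
aspect_ratio = 1.0 + 0.4 * septum_fraction + rng.normal(0, 0.05, 300)
phase = np.digitize(septum_fraction, [0.33, 0.66]) + 1   # synthetic phases 1-3

X = np.column_stack([septum_fraction, aspect_ratio])
X_train, X_test, y_train, y_test = train_test_split(X, phase, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy on the synthetic data: {clf.score(X_test, y_test):.2f}")
```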

Citations: 0