Cell-TRACTR: A transformer-based model for end-to-end segmentation and tracking of cells

bioRxiv | Pub Date: 2024-07-16 | DOI: 10.1101/2024.07.11.603075
Owen M. O’Connor, M. Dunlop
{"title":"细胞-TRACTR:基于变压器的细胞端到端分割和跟踪模型","authors":"Owen M. O’Connor, M. Dunlop","doi":"10.1101/2024.07.11.603075","DOIUrl":null,"url":null,"abstract":"Deep learning-based methods for identifying and tracking cells within microscopy images have revolutionized the speed and throughput of data analysis. These methods for analyzing biological and medical data have capitalized on advances from the broader computer vision field. However, cell tracking can present unique challenges, with frequent cell division events and the need to track many objects with similar visual appearances complicating analysis. Existing architectures developed for cell tracking based on convolutional neural networks (CNNs) have tended to fall short in managing the spatial and global contextual dependencies that are crucial for tracking cells. To overcome these limitations, we introduce Cell-TRACTR (Transformer with Attention for Cell Tracking and Recognition), a novel deep learning model that uses a transformer-based architecture. The attention mechanism inherent in transformers facilitates long-range connections, effectively linking features across different spatial regions, which is critical for robust cell tracking. Cell-TRACTR operates in an end-to-end manner, simultaneously segmenting and tracking cells without the need for post-processing. Alongside this model, we introduce the Cell-HOTA metric, an extension of the Higher Order Tracking Accuracy (HOTA) metric that we adapted to assess cell division. Cell-HOTA differs from standard cell tracking metrics by offering a balanced and easily interpretable assessment of detection, association, and division accuracy. We test our Cell-TRACTR model on datasets of bacteria growing within a defined microfluidic geometry and mammalian cells growing freely in two dimensions. Our results demonstrate that Cell-TRACTR exhibits excellent performance in tracking and division accuracy compared to state-of-the-art algorithms, while also matching traditional benchmarks in detection accuracy. This work establishes a new framework for employing transformer-based models in cell segmentation and tracking. Author Summary Understanding the growth, movement, and gene expression dynamics of individual cells is critical for studies in a wide range of areas, from antibiotic resistance to cancer. Monitoring individual cells can reveal unique insights that are obscured by population averages. Although modern microscopy techniques have vastly improved researchers’ ability to collect data, tracking individual cells over time remains a challenge, particularly due to complexities such as cell division and non-linear cell movements. To address this, we developed a new transformer-based model called Cell-TRACTR that can segment and track single cells without the need for post-processing. The strength of the transformer architecture lies in its attention mechanism, which integrates global context. Attention makes this model particularly well suited for tracking cells across a sequence of images. In addition to the Cell-TRACTR model, we introduce a new metric, Cell-HOTA, to evaluate tracking algorithms in terms of detection, association, and division accuracy. The metric breaks down performance into sub-metrics, helping researchers pinpoint the strengths and weaknesses of their tracking algorithm. 
When compared to state-of-the-art algorithms, Cell-TRACTR meets or exceeds many current benchmarks, offering excellent potential as a new tool for the analysis of series of images with single-cell resolution.","PeriodicalId":9124,"journal":{"name":"bioRxiv","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cell-TRACTR: A transformer-based model for end-to-end segmentation and tracking of cells\",\"authors\":\"Owen M. O’Connor, M. Dunlop\",\"doi\":\"10.1101/2024.07.11.603075\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep learning-based methods for identifying and tracking cells within microscopy images have revolutionized the speed and throughput of data analysis. These methods for analyzing biological and medical data have capitalized on advances from the broader computer vision field. However, cell tracking can present unique challenges, with frequent cell division events and the need to track many objects with similar visual appearances complicating analysis. Existing architectures developed for cell tracking based on convolutional neural networks (CNNs) have tended to fall short in managing the spatial and global contextual dependencies that are crucial for tracking cells. To overcome these limitations, we introduce Cell-TRACTR (Transformer with Attention for Cell Tracking and Recognition), a novel deep learning model that uses a transformer-based architecture. The attention mechanism inherent in transformers facilitates long-range connections, effectively linking features across different spatial regions, which is critical for robust cell tracking. Cell-TRACTR operates in an end-to-end manner, simultaneously segmenting and tracking cells without the need for post-processing. Alongside this model, we introduce the Cell-HOTA metric, an extension of the Higher Order Tracking Accuracy (HOTA) metric that we adapted to assess cell division. Cell-HOTA differs from standard cell tracking metrics by offering a balanced and easily interpretable assessment of detection, association, and division accuracy. We test our Cell-TRACTR model on datasets of bacteria growing within a defined microfluidic geometry and mammalian cells growing freely in two dimensions. Our results demonstrate that Cell-TRACTR exhibits excellent performance in tracking and division accuracy compared to state-of-the-art algorithms, while also matching traditional benchmarks in detection accuracy. This work establishes a new framework for employing transformer-based models in cell segmentation and tracking. Author Summary Understanding the growth, movement, and gene expression dynamics of individual cells is critical for studies in a wide range of areas, from antibiotic resistance to cancer. Monitoring individual cells can reveal unique insights that are obscured by population averages. Although modern microscopy techniques have vastly improved researchers’ ability to collect data, tracking individual cells over time remains a challenge, particularly due to complexities such as cell division and non-linear cell movements. To address this, we developed a new transformer-based model called Cell-TRACTR that can segment and track single cells without the need for post-processing. The strength of the transformer architecture lies in its attention mechanism, which integrates global context. 
Attention makes this model particularly well suited for tracking cells across a sequence of images. In addition to the Cell-TRACTR model, we introduce a new metric, Cell-HOTA, to evaluate tracking algorithms in terms of detection, association, and division accuracy. The metric breaks down performance into sub-metrics, helping researchers pinpoint the strengths and weaknesses of their tracking algorithm. When compared to state-of-the-art algorithms, Cell-TRACTR meets or exceeds many current benchmarks, offering excellent potential as a new tool for the analysis of series of images with single-cell resolution.\",\"PeriodicalId\":9124,\"journal\":{\"name\":\"bioRxiv\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"bioRxiv\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1101/2024.07.11.603075\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"bioRxiv","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1101/2024.07.11.603075","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Deep learning-based methods for identifying and tracking cells within microscopy images have revolutionized the speed and throughput of data analysis. These methods for analyzing biological and medical data have capitalized on advances from the broader computer vision field. However, cell tracking can present unique challenges, with frequent cell division events and the need to track many objects with similar visual appearances complicating analysis. Existing architectures developed for cell tracking based on convolutional neural networks (CNNs) have tended to fall short in managing the spatial and global contextual dependencies that are crucial for tracking cells. To overcome these limitations, we introduce Cell-TRACTR (Transformer with Attention for Cell Tracking and Recognition), a novel deep learning model that uses a transformer-based architecture. The attention mechanism inherent in transformers facilitates long-range connections, effectively linking features across different spatial regions, which is critical for robust cell tracking. Cell-TRACTR operates in an end-to-end manner, simultaneously segmenting and tracking cells without the need for post-processing. Alongside this model, we introduce the Cell-HOTA metric, an extension of the Higher Order Tracking Accuracy (HOTA) metric that we adapted to assess cell division. Cell-HOTA differs from standard cell tracking metrics by offering a balanced and easily interpretable assessment of detection, association, and division accuracy. We test our Cell-TRACTR model on datasets of bacteria growing within a defined microfluidic geometry and mammalian cells growing freely in two dimensions. Our results demonstrate that Cell-TRACTR exhibits excellent performance in tracking and division accuracy compared to state-of-the-art algorithms, while also matching traditional benchmarks in detection accuracy. This work establishes a new framework for employing transformer-based models in cell segmentation and tracking.

Author Summary

Understanding the growth, movement, and gene expression dynamics of individual cells is critical for studies in a wide range of areas, from antibiotic resistance to cancer. Monitoring individual cells can reveal unique insights that are obscured by population averages. Although modern microscopy techniques have vastly improved researchers' ability to collect data, tracking individual cells over time remains a challenge, particularly due to complexities such as cell division and non-linear cell movements. To address this, we developed a new transformer-based model called Cell-TRACTR that can segment and track single cells without the need for post-processing. The strength of the transformer architecture lies in its attention mechanism, which integrates global context. Attention makes this model particularly well suited for tracking cells across a sequence of images. In addition to the Cell-TRACTR model, we introduce a new metric, Cell-HOTA, to evaluate tracking algorithms in terms of detection, association, and division accuracy. The metric breaks down performance into sub-metrics, helping researchers pinpoint the strengths and weaknesses of their tracking algorithm. When compared to state-of-the-art algorithms, Cell-TRACTR meets or exceeds many current benchmarks, offering excellent potential as a new tool for the analysis of series of images with single-cell resolution.
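As background for the attention mechanism emphasized above, the following is a minimal, illustrative sketch of scaled dot-product attention in which per-cell object queries attend to encoded image features from another frame. It is not the authors' implementation; the function, array names, and dimensions are hypothetical and chosen only to show how attention lets each query draw on features from any spatial location rather than a local neighborhood.

```python
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Generic scaled dot-product attention (Vaswani et al., 2017).

    queries: (n_queries, d) array, e.g. per-cell object queries for frame t
    keys, values: (n_features, d) arrays, e.g. encoded features from frame t-1
    Returns an (n_queries, d) array in which each query is a weighted
    combination of all values, i.e. a global (long-range) readout.
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)          # (n_queries, n_features)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over features
    return weights @ values

# Hypothetical example: 5 cell queries attending to 100 image features
rng = np.random.default_rng(0)
cell_queries = rng.normal(size=(5, 64))
frame_features = rng.normal(size=(100, 64))
attended = scaled_dot_product_attention(cell_queries, frame_features, frame_features)
print(attended.shape)  # (5, 64)
```

Because every query is compared against every feature, the weighting is global by construction, which is the property the abstract credits for linking cells across different spatial regions and frames.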
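For readers unfamiliar with the base metric, the published HOTA score (Luiten et al., 2021) that Cell-HOTA extends decomposes, at a single localization threshold α, into a detection term and an association term, as sketched below. The division-accuracy component added by Cell-HOTA is not specified in this abstract and is therefore not reproduced here.

```latex
\mathrm{DetA}_\alpha = \frac{|\mathrm{TP}|}{|\mathrm{TP}| + |\mathrm{FN}| + |\mathrm{FP}|},
\qquad
\mathrm{AssA}_\alpha = \frac{1}{|\mathrm{TP}|}\sum_{c \in \mathrm{TP}}
  \frac{|\mathrm{TPA}(c)|}{|\mathrm{TPA}(c)| + |\mathrm{FNA}(c)| + |\mathrm{FPA}(c)|}

\mathrm{HOTA}_\alpha = \sqrt{\mathrm{DetA}_\alpha \cdot \mathrm{AssA}_\alpha},
\qquad
\mathrm{HOTA} = \frac{1}{|A|}\sum_{\alpha \in A} \mathrm{HOTA}_\alpha
```

Here TPA(c), FNA(c), and FPA(c) are the true positive, false negative, and false positive associations for a matched detection c, and A is the set of localization thresholds (commonly 0.05 to 0.95 in steps of 0.05). This decomposition is what makes the sub-metric breakdown described in the abstract possible.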