{"title":"细胞-TRACTR:基于变压器的细胞端到端分割和跟踪模型","authors":"Owen M. O’Connor, M. Dunlop","doi":"10.1101/2024.07.11.603075","DOIUrl":null,"url":null,"abstract":"Deep learning-based methods for identifying and tracking cells within microscopy images have revolutionized the speed and throughput of data analysis. These methods for analyzing biological and medical data have capitalized on advances from the broader computer vision field. However, cell tracking can present unique challenges, with frequent cell division events and the need to track many objects with similar visual appearances complicating analysis. Existing architectures developed for cell tracking based on convolutional neural networks (CNNs) have tended to fall short in managing the spatial and global contextual dependencies that are crucial for tracking cells. To overcome these limitations, we introduce Cell-TRACTR (Transformer with Attention for Cell Tracking and Recognition), a novel deep learning model that uses a transformer-based architecture. The attention mechanism inherent in transformers facilitates long-range connections, effectively linking features across different spatial regions, which is critical for robust cell tracking. Cell-TRACTR operates in an end-to-end manner, simultaneously segmenting and tracking cells without the need for post-processing. Alongside this model, we introduce the Cell-HOTA metric, an extension of the Higher Order Tracking Accuracy (HOTA) metric that we adapted to assess cell division. Cell-HOTA differs from standard cell tracking metrics by offering a balanced and easily interpretable assessment of detection, association, and division accuracy. We test our Cell-TRACTR model on datasets of bacteria growing within a defined microfluidic geometry and mammalian cells growing freely in two dimensions. Our results demonstrate that Cell-TRACTR exhibits excellent performance in tracking and division accuracy compared to state-of-the-art algorithms, while also matching traditional benchmarks in detection accuracy. This work establishes a new framework for employing transformer-based models in cell segmentation and tracking. Author Summary Understanding the growth, movement, and gene expression dynamics of individual cells is critical for studies in a wide range of areas, from antibiotic resistance to cancer. Monitoring individual cells can reveal unique insights that are obscured by population averages. Although modern microscopy techniques have vastly improved researchers’ ability to collect data, tracking individual cells over time remains a challenge, particularly due to complexities such as cell division and non-linear cell movements. To address this, we developed a new transformer-based model called Cell-TRACTR that can segment and track single cells without the need for post-processing. The strength of the transformer architecture lies in its attention mechanism, which integrates global context. Attention makes this model particularly well suited for tracking cells across a sequence of images. In addition to the Cell-TRACTR model, we introduce a new metric, Cell-HOTA, to evaluate tracking algorithms in terms of detection, association, and division accuracy. The metric breaks down performance into sub-metrics, helping researchers pinpoint the strengths and weaknesses of their tracking algorithm. 
When compared to state-of-the-art algorithms, Cell-TRACTR meets or exceeds many current benchmarks, offering excellent potential as a new tool for the analysis of series of images with single-cell resolution.","PeriodicalId":9124,"journal":{"name":"bioRxiv","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cell-TRACTR: A transformer-based model for end-to-end segmentation and tracking of cells\",\"authors\":\"Owen M. O’Connor, M. Dunlop\",\"doi\":\"10.1101/2024.07.11.603075\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep learning-based methods for identifying and tracking cells within microscopy images have revolutionized the speed and throughput of data analysis. These methods for analyzing biological and medical data have capitalized on advances from the broader computer vision field. However, cell tracking can present unique challenges, with frequent cell division events and the need to track many objects with similar visual appearances complicating analysis. Existing architectures developed for cell tracking based on convolutional neural networks (CNNs) have tended to fall short in managing the spatial and global contextual dependencies that are crucial for tracking cells. To overcome these limitations, we introduce Cell-TRACTR (Transformer with Attention for Cell Tracking and Recognition), a novel deep learning model that uses a transformer-based architecture. The attention mechanism inherent in transformers facilitates long-range connections, effectively linking features across different spatial regions, which is critical for robust cell tracking. Cell-TRACTR operates in an end-to-end manner, simultaneously segmenting and tracking cells without the need for post-processing. Alongside this model, we introduce the Cell-HOTA metric, an extension of the Higher Order Tracking Accuracy (HOTA) metric that we adapted to assess cell division. Cell-HOTA differs from standard cell tracking metrics by offering a balanced and easily interpretable assessment of detection, association, and division accuracy. We test our Cell-TRACTR model on datasets of bacteria growing within a defined microfluidic geometry and mammalian cells growing freely in two dimensions. Our results demonstrate that Cell-TRACTR exhibits excellent performance in tracking and division accuracy compared to state-of-the-art algorithms, while also matching traditional benchmarks in detection accuracy. This work establishes a new framework for employing transformer-based models in cell segmentation and tracking. Author Summary Understanding the growth, movement, and gene expression dynamics of individual cells is critical for studies in a wide range of areas, from antibiotic resistance to cancer. Monitoring individual cells can reveal unique insights that are obscured by population averages. Although modern microscopy techniques have vastly improved researchers’ ability to collect data, tracking individual cells over time remains a challenge, particularly due to complexities such as cell division and non-linear cell movements. To address this, we developed a new transformer-based model called Cell-TRACTR that can segment and track single cells without the need for post-processing. The strength of the transformer architecture lies in its attention mechanism, which integrates global context. 
Attention makes this model particularly well suited for tracking cells across a sequence of images. In addition to the Cell-TRACTR model, we introduce a new metric, Cell-HOTA, to evaluate tracking algorithms in terms of detection, association, and division accuracy. The metric breaks down performance into sub-metrics, helping researchers pinpoint the strengths and weaknesses of their tracking algorithm. When compared to state-of-the-art algorithms, Cell-TRACTR meets or exceeds many current benchmarks, offering excellent potential as a new tool for the analysis of series of images with single-cell resolution.\",\"PeriodicalId\":9124,\"journal\":{\"name\":\"bioRxiv\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"bioRxiv\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1101/2024.07.11.603075\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"bioRxiv","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1101/2024.07.11.603075","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cell-TRACTR: A transformer-based model for end-to-end segmentation and tracking of cells

Owen M. O’Connor, M. Dunlop
bioRxiv preprint, posted July 16, 2024. DOI: 10.1101/2024.07.11.603075
Abstract

Deep learning-based methods for identifying and tracking cells within microscopy images have revolutionized the speed and throughput of data analysis. These methods for analyzing biological and medical data have capitalized on advances from the broader computer vision field. However, cell tracking presents unique challenges: frequent cell division events and the need to track many visually similar objects complicate analysis. Existing cell-tracking architectures based on convolutional neural networks (CNNs) have tended to fall short in capturing the spatial and global contextual dependencies that are crucial for tracking cells. To overcome these limitations, we introduce Cell-TRACTR (Transformer with Attention for Cell Tracking and Recognition), a novel deep learning model that uses a transformer-based architecture. The attention mechanism inherent in transformers facilitates long-range connections, effectively linking features across different spatial regions, which is critical for robust cell tracking. Cell-TRACTR operates in an end-to-end manner, simultaneously segmenting and tracking cells without the need for post-processing. Alongside this model, we introduce the Cell-HOTA metric, an extension of the Higher Order Tracking Accuracy (HOTA) metric that we adapted to assess cell division. Cell-HOTA differs from standard cell tracking metrics by offering a balanced and easily interpretable assessment of detection, association, and division accuracy. We test our Cell-TRACTR model on datasets of bacteria growing within a defined microfluidic geometry and mammalian cells growing freely in two dimensions. Our results demonstrate that Cell-TRACTR achieves excellent tracking and division accuracy compared to state-of-the-art algorithms, while also matching traditional benchmarks in detection accuracy. This work establishes a new framework for employing transformer-based models in cell segmentation and tracking.

Author Summary

Understanding the growth, movement, and gene expression dynamics of individual cells is critical for studies in a wide range of areas, from antibiotic resistance to cancer. Monitoring individual cells can reveal unique insights that are obscured by population averages. Although modern microscopy techniques have vastly improved researchers’ ability to collect data, tracking individual cells over time remains a challenge, particularly due to complexities such as cell division and non-linear cell movements. To address this, we developed a new transformer-based model called Cell-TRACTR that can segment and track single cells without the need for post-processing. The strength of the transformer architecture lies in its attention mechanism, which integrates global context and makes the model particularly well suited for tracking cells across a sequence of images. In addition to the Cell-TRACTR model, we introduce a new metric, Cell-HOTA, to evaluate tracking algorithms in terms of detection, association, and division accuracy. The metric breaks performance down into sub-metrics, helping researchers pinpoint the strengths and weaknesses of their tracking algorithm. Compared to state-of-the-art algorithms, Cell-TRACTR meets or exceeds many current benchmarks, offering excellent potential as a new tool for the analysis of image series with single-cell resolution.
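To make the role of attention concrete, the minimal NumPy sketch below shows the scaled dot-product attention at the core of transformer-based trackers: embeddings representing cells tracked through frame t act as queries against the encoded features of frame t+1, so each track can weigh every spatial location in the new frame at once. The dimensions and the query-propagation scheme here are illustrative assumptions, not the exact Cell-TRACTR architecture.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Q: (n_queries, d); K, V: (n_locations, d). Returns (n_queries, d)."""
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                    # similarity of each track to each location
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax over all spatial locations
        return weights @ V                               # content-based pooling with global reach

    rng = np.random.default_rng(0)
    d_model = 64
    track_queries = rng.normal(size=(5, d_model))        # 5 cells tracked through frame t (assumed)
    frame_features = rng.normal(size=(400, d_model))     # 20x20 feature map of frame t+1, flattened

    updated = scaled_dot_product_attention(track_queries, frame_features, frame_features)
    print(updated.shape)  # (5, 64): one updated embedding per tracked cell

Because the softmax runs over every location in the frame, a track is not confined to a local search window around its previous position; this global reach is what the abstract means by long-range connections linking features across spatial regions.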
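On the evaluation side, the published HOTA metric is the geometric mean of a detection accuracy (DetA) and an association accuracy (AssA) sub-score. The sketch below folds a hypothetical division-accuracy term (DivA) into the same geometric-mean form to illustrate how a Cell-HOTA-style decomposition exposes each failure mode separately; the three-way combination and the toy values are assumptions for illustration, not the paper's exact definition.

    from math import prod

    def hota_style_score(det_a: float, ass_a: float, div_a: float) -> float:
        """Geometric mean of the sub-metrics; each lies in [0, 1]."""
        parts = (det_a, ass_a, div_a)
        return prod(parts) ** (1.0 / len(parts))

    # Toy sub-metric values (assumed), each already computed over matched detections:
    det_a = 0.92   # how well cells are detected frame by frame
    ass_a = 0.85   # how consistently detections are linked into tracks
    div_a = 0.78   # how accurately mother-daughter division events are resolved

    print(f"Cell-HOTA-style score: {hota_style_score(det_a, ass_a, div_a):.3f}")

A geometric mean penalizes an algorithm that excels at detection but fails at association or division, which is the balance the abstract highlights; inspecting the sub-metrics individually then pinpoints where a tracker falls short.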