Cell detection with convolutional spiking neural network for neuromorphic cytometry
Ziyao Zhang, Haoxiang Yang, J. K. Eshraghian, Jiayin Li, Ken-Tye Yong, D. Vigolo, Helen M. McGuire, Omid Kavehei
APL Machine Learning, published 2024-05-08
DOI: 10.1063/5.0199514 (https://doi.org/10.1063/5.0199514)
Citations: 0
Abstract
Imaging flow cytometry (IFC) is an advanced cell-analytic technology offering rich spatial information and fluorescence intensity for multi-parametric characterization. Manual gating of cytometry data enables the classification of discrete populations in a sample based on extracted features. However, this expert-driven technique can be subjective and laborious, often presenting challenges in reproducibility, and it is inherently limited to bivariate analysis. Numerous AI-driven cell classification methods have recently emerged to automate gating, incorporating multivariate data with enhanced reproducibility and accuracy. Our previous work demonstrated the early development of neuromorphic imaging cytometry, evaluating its feasibility in overcoming the limitations of conventional frame-based imaging systems, namely data redundancy, limited fluorescence sensitivity, and compromised throughput. Herein, we adopted a convolutional spiking neural network (SNN) combined with the YOLOv3 model (SNN-YOLO) to perform cell classification and detection on label-free samples under neuromorphic vision. Spiking networks are naturally suited to post-processing the output of neuromorphic vision sensors. The experiment was conducted with polystyrene-based microparticles and the THP-1 and LL/2 cell lines. The network's performance was compared against a traditional YOLOv3 model fed with event-generated frame data, which served as a baseline. Our SNN-YOLO outperformed the YOLOv3 baseline, achieving the highest average class accuracy of 0.974 versus 0.962 for YOLOv3. Both models performed comparably on other key metrics, and both warrant further exploration for future auto-gating strategies and cytometry applications.
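Since the abstract describes the architecture only at a high level, the following is a minimal sketch of what a convolutional SNN backbone feeding a YOLO-style detection head could look like, assuming snnTorch-style leaky integrate-and-fire (LIF) neurons and event frames with two polarity channels. All layer sizes, time steps, anchor counts, and class counts here are illustrative assumptions, not the paper's configuration.

```python
# Hypothetical sketch of an SNN-YOLO-style model: a convolutional spiking
# backbone processes a sequence of event frames, and a YOLO-style 1x1 conv
# head predicts per-anchor boxes from the accumulated spike rates.
# Assumes snnTorch (https://github.com/jeshraghian/snntorch).
import torch
import torch.nn as nn
import snntorch as snn

class SpikingYOLOSketch(nn.Module):
    def __init__(self, num_classes=3, num_anchors=3, time_steps=8, beta=0.9):
        super().__init__()
        self.time_steps = time_steps
        # Convolutional spiking backbone: Conv -> pool -> LIF, repeated.
        self.conv1 = nn.Conv2d(2, 16, 3, padding=1)  # 2 event polarity channels
        self.lif1 = snn.Leaky(beta=beta)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.lif2 = snn.Leaky(beta=beta)
        self.pool = nn.MaxPool2d(2)
        # YOLO-style head: per grid cell, num_anchors * (x, y, w, h, obj + classes).
        self.head = nn.Conv2d(32, num_anchors * (5 + num_classes), 1)

    def forward(self, x):
        # x: (time_steps, batch, 2, H, W) sequence of event frames.
        mem1 = self.lif1.init_leaky()
        mem2 = self.lif2.init_leaky()
        spk_sum = 0
        for t in range(self.time_steps):
            cur1 = self.pool(self.conv1(x[t]))
            spk1, mem1 = self.lif1(cur1, mem1)
            cur2 = self.pool(self.conv2(spk1))
            spk2, mem2 = self.lif2(cur2, mem2)
            spk_sum = spk_sum + spk2        # accumulate spikes over time
        rate = spk_sum / self.time_steps    # spike-rate code for the readout
        return self.head(rate)              # (batch, A*(5+C), H/4, W/4)

model = SpikingYOLOSketch()
events = torch.rand(8, 1, 2, 128, 128)      # dummy event-frame tensor
out = model(events)
print(out.shape)                            # torch.Size([1, 24, 32, 32])
```

Reading out a rate code from accumulated spikes, as done here, is one common way to bridge a spiking backbone to a conventional dense detection head; the paper's actual decoding scheme may differ.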