Remote heart rate measurement based on video color magnification and spatiotemporal self-attention

Ning Sun, Peixian He, Jixin Liu, Lei Chai, Cong Wu, Xiujuan Liu

Biomedical Signal Processing and Control, Volume 106, Article 107677, published 2025-02-21. DOI: 10.1016/j.bspc.2025.107677

Citations: 0
Abstract
Remote photoplethysmography (rPPG) for heart rate measurement has garnered significant attention due to its non-contact advantages. The challenge in video-based remote heart rate measurement lies in accurately capturing subtle changes in facial color. We propose an end-to-end deep learning model named Video Color Magnification and Spatiotemporal Feature Extraction Network (VS-Net). VS-Net comprises three main modules: video color magnification, spatiotemporal self-attention feature extraction, and contrastive learning. The video color magnification module, implemented using a deep neural network, initially magnifies subtle facial color changes in the input video. The magnified color features are then fed into the spatiotemporal self-attention feature extraction module. This module utilizes a multi-head self-attention mechanism along with convolutional neural networks to locally and globally model information exchange across magnified video frames, capturing long-term dependencies and extracting spatiotemporal features. Additionally, the model incorporates a contrastive learning module designed to improve weak signal detection in facial videos. By generating positive and negative samples based on video frequency resampling, the model captures similarities and differences among input samples, thereby learning more robust semantic feature representations. Comprehensive experiments were conducted on three public datasets: UBFC-RPPG, PURE, and MAHNOB-HCI. The results demonstrate that VS-Net effectively extracts rPPG signals from facial videos and outperforms state-of-the-art methods in heart rate measurement.
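The contrastive learning module described above builds positive and negative pairs by resampling videos in time, which shifts the apparent pulse frequency. A minimal sketch of this idea on a 1-D rPPG-like trace, assuming simple linear interpolation for the resampling (the helper names and the toy signal are illustrative, not from the paper):

```python
import numpy as np

def resample_frequency(signal, factor):
    """Resample a 1-D trace in time; the apparent frequency of the
    result is scaled by 1/factor (hypothetical helper)."""
    n = len(signal)
    src_idx = np.clip(np.linspace(0, (n - 1) / factor, n), 0, n - 1)
    return np.interp(src_idx, np.arange(n), signal)

def dominant_freq(x, fps):
    """Return the strongest frequency (Hz) in a trace via the FFT."""
    spec = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x), 1 / fps)
    return freqs[np.argmax(spec)]

# Toy pulse trace: a 1.2 Hz sinusoid sampled at 30 fps for 10 s,
# standing in for the color signal extracted from a face video.
fps, f0 = 30, 1.2
t = np.arange(fps * 10) / fps
anchor = np.sin(2 * np.pi * f0 * t)

positive = resample_frequency(anchor, 1.0)  # same frequency -> positive pair
negative = resample_frequency(anchor, 1.5)  # frequency shifted to 0.8 Hz -> negative pair
```

A contrastive loss would then pull the features of `anchor` and `positive` together while pushing `anchor` and `negative` apart, so the learned representation becomes sensitive to pulse frequency rather than to appearance.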
Journal Introduction
Biomedical Signal Processing and Control aims to provide a cross-disciplinary international forum for the interchange of information on research in the measurement and analysis of signals and images in clinical medicine and the biological sciences. Emphasis is placed on contributions dealing with the practical, applications-led research on the use of methods and devices in clinical diagnosis, patient monitoring and management.
Biomedical Signal Processing and Control reflects the main areas in which these methods are being used and developed at the interface of both engineering and clinical science. The scope of the journal is defined to include relevant review papers, technical notes, short communications and letters. Tutorial papers and special issues will also be published.