{"title":"Hybrid Score- and Rank-Level Fusion for Person Identification using Face and ECG Data","authors":"Thomas Truong, Jonathan Graf, S. Yanushkevich","doi":"10.1109/EST.2019.8806206","DOIUrl":null,"url":null,"abstract":"Uni-modal identification systems are vulnerable to errors in sensor data collection and are therefore more likely to misidentify subjects. For instance, relying on data solely from an RGB face camera can cause problems in poorly lit environments or if subjects do not face the camera. Other identification methods such as electrocardiograms (ECG) have issues with improper lead connections to the skin. Errors in identification are minimized through the fusion of information gathered from both of these models. This paper proposes a methodology for combining the identification results of face and ECG data using Part A of the BioVid Heat Pain Database containing synchronized RGB-video and ECG data on 87 subjects. Using 10-fold cross-validation, face identification was 98.8% accurate, while the ECG identification was 96.1% accurate. By using a fusion approach the identification accuracy improved to 99.8%. Our proposed methodology allows for identification accuracies to be significantly improved by using disparate face and ECG models that have non-overlapping modalities.","PeriodicalId":102238,"journal":{"name":"2019 Eighth International Conference on Emerging Security Technologies (EST)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 Eighth International Conference on Emerging Security Technologies (EST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/EST.2019.8806206","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
Uni-modal identification systems are vulnerable to errors in sensor data collection and are therefore more likely to misidentify subjects. For instance, relying solely on data from an RGB face camera can cause problems in poorly lit environments or when subjects do not face the camera. Other identification methods, such as those based on the electrocardiogram (ECG), suffer from improper lead connections to the skin. Identification errors can be minimized by fusing the information gathered from both of these modalities. This paper proposes a methodology for combining the identification results of face and ECG data using Part A of the BioVid Heat Pain Database, which contains synchronized RGB video and ECG data for 87 subjects. Under 10-fold cross-validation, face identification was 98.8% accurate, while ECG identification was 96.1% accurate. Using the proposed fusion approach, identification accuracy improved to 99.8%. Our proposed methodology allows identification accuracy to be significantly improved by combining disparate face and ECG models built on non-overlapping modalities.
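The abstract does not spell out the exact fusion rule, so the sketch below is only a rough illustration of two generic building blocks that a hybrid score- and rank-level scheme could combine: weighted-sum score fusion over normalized per-subject scores, and Borda-count rank fusion. All function names, weights, and the example scores are hypothetical and are not taken from the paper.

```python
import numpy as np

def score_level_fusion(face_scores, ecg_scores, w_face=0.5, w_ecg=0.5):
    """Weighted-sum score fusion: each modality produces a per-subject
    similarity vector, min-max normalized before combining."""
    def minmax(s):
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)
    return w_face * minmax(face_scores) + w_ecg * minmax(ecg_scores)

def rank_level_fusion(face_scores, ecg_scores):
    """Borda-count rank fusion: each modality ranks the enrolled subjects
    (rank 0 = best match); a lower summed rank is a stronger combined match."""
    face_ranks = np.argsort(np.argsort(-face_scores))
    ecg_ranks = np.argsort(np.argsort(-ecg_scores))
    return face_ranks + ecg_ranks

# Hypothetical per-subject scores for a gallery of five enrolled subjects.
face = np.array([0.91, 0.10, 0.40, 0.35, 0.22])
ecg = np.array([0.55, 0.20, 0.60, 0.15, 0.30])

print("Score-level decision:", int(np.argmax(score_level_fusion(face, ecg))))
print("Rank-level decision:", int(np.argmin(rank_level_fusion(face, ecg))))
```

In a hybrid scheme, the score-level and rank-level decisions (or their confidences) would themselves be combined, which is one way disparate face and ECG classifiers can compensate for each other's sensor-level failures.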