{"title":"PassFrame: Generating image-based passwords from egocentric videos","authors":"Ngu Nguyen, S. Sigg","doi":"10.1109/PERCOMW.2017.7917518","DOIUrl":null,"url":null,"abstract":"In this paper, we analyse first-person-view videos to develop a personalized user authentication mechanism. Our proposed algorithm generates provisional image-based passwords which benefit a variety of purposes such as unlocking a mobile device or fallback authentication. First, representative frames are extracted from the egocentric videos. Then, they are split into distinguishable segments before a clustering procedure is applied to discard repetitive scenes. The whole process aims to retain memorable images to form the authentication challenges. We integrate eye tracking data to select informative sequences of video frames and suggest another alternative method if an eye-facing camera is not available. To evaluate our system, we perform experiments in different settings including object-interaction activities and traveling contexts. Even though our mechanism produces variable graphical passwords, the log-in effort for the user is comparable with approaches based on static challenges. We verified the authentication scheme in the presence of an informed attacker and observed that the effort is significantly higher than that of the legitimate user.","PeriodicalId":319638,"journal":{"name":"2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PERCOMW.2017.7917518","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
In this paper, we analyse first-person-view videos to develop a personalized user authentication mechanism. Our proposed algorithm generates provisional image-based passwords that serve a variety of purposes, such as unlocking a mobile device or fallback authentication. First, representative frames are extracted from the egocentric videos. They are then split into distinguishable segments, and a clustering procedure is applied to discard repetitive scenes. The whole process aims to retain memorable images that form the authentication challenges. We integrate eye-tracking data to select informative sequences of video frames, and we suggest an alternative method for when an eye-facing camera is not available. To evaluate our system, we perform experiments in different settings, including object-interaction activities and traveling contexts. Even though our mechanism produces variable graphical passwords, the login effort for the user is comparable to that of approaches based on static challenges. We verified the authentication scheme in the presence of an informed attacker and observed that the attacker's effort is significantly higher than that of the legitimate user.
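The abstract outlines a pipeline of frame extraction, segmentation, and clustering to keep one memorable image per scene. The snippet below is a minimal sketch of such a pipeline, not the authors' implementation: it assumes OpenCV and scikit-learn, samples frames from a video, describes each frame with an HSV colour histogram, clusters the descriptors, and keeps the frame nearest each cluster centre so repetitive scenes are discarded. All parameter values (sampling step, histogram bins, number of scenes) are illustrative assumptions.

```python
# Hypothetical keyframe-selection sketch in the spirit of the described pipeline.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def sample_frames(video_path, step=30):
    """Read every `step`-th frame from the video (assumed sampling rate)."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames

def colour_histogram(frame, bins=8):
    """Flattened HSV colour histogram used as a simple scene descriptor."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [bins] * 3,
                        [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def representative_frames(video_path, n_scenes=10):
    """Cluster frame descriptors and return one frame per cluster."""
    frames = sample_frames(video_path)
    descriptors = np.array([colour_histogram(f) for f in frames])
    kmeans = KMeans(n_clusters=min(n_scenes, len(frames)), n_init=10)
    kmeans.fit(descriptors)
    keyframes = []
    for c in range(kmeans.n_clusters):
        members = np.where(kmeans.labels_ == c)[0]
        # Pick the member closest to the cluster centre as the scene representative.
        centre = kmeans.cluster_centers_[c]
        best = members[np.argmin(
            np.linalg.norm(descriptors[members] - centre, axis=1))]
        keyframes.append(frames[best])
    return keyframes
```

The returned keyframes would then be candidates for composing an image-based authentication challenge; the paper additionally uses eye-tracking data (or a fallback method) to rank informative frames, which this sketch does not model.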