Y. Tokuyama, R. J. Rajapakse, Sachi Yamabe, K. Konno, Y. Hung
Augmented reality (AR) integrates 3D virtual objects into a 3D real environment in real time. Augmented reality applications such as medical visualization, maintenance and repair, robot path planning, entertainment, and military aircraft navigation and targeting have been proposed. This paper introduces the development of an augmented reality game that allows the user to carry out lower limb exercise using a natural user interface based on Microsoft Kinect. The system is designed as an augmented game in which users can see themselves in a world augmented with virtual objects generated by computer graphics. The player, sitting in a chair, simply has to step on a mole that randomly pops up and down. The game encourages activity in a large number of lower limb muscles, which helps prevent falls, and it is also suitable for rehabilitation.
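A minimal sketch of the kind of hit test such a game needs: given the foot joint position reported by a skeleton tracker (such as the Kinect SDK) and the position of a virtual mole, decide whether the player has "stepped" on it. The joint representation, hit radius, and mole spawning logic below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical stomp detection for a Kinect-style whack-a-mole exercise game.
import math
import random

STEP_RADIUS = 0.15  # metres; assumed hit radius around the mole

def is_stomp(foot_xyz, mole_xyz, radius=STEP_RADIUS):
    """Return True if the tracked foot joint is within `radius` of the mole."""
    return math.dist(foot_xyz, mole_xyz) <= radius

def spawn_mole(play_area=((-0.5, 0.5), (0.0, 0.0), (1.0, 2.0))):
    """Pick a random mole position (x, y, z) on the floor in front of the chair."""
    (xmin, xmax), (ymin, ymax), (zmin, zmax) = play_area
    return (random.uniform(xmin, xmax), random.uniform(ymin, ymax), random.uniform(zmin, zmax))

# Example: one frame of game logic with a made-up foot position.
mole = spawn_mole()
foot = (mole[0] + 0.05, 0.0, mole[2] - 0.03)  # pretend the tracker reported this
print("hit" if is_stomp(foot, mole) else "miss")
```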
{"title":"A Kinect-Based Augmented Reality Game for Lower Limb Exercise","authors":"Y. Tokuyama, R. J. Rajapakse, Sachi Yamabe, K. Konno, Y. Hung","doi":"10.1109/CW.2019.00077","DOIUrl":"https://doi.org/10.1109/CW.2019.00077","url":null,"abstract":"Augmented reality (AR) is where 3D virtual objects are integrated into a 3D real environment in real time. The augmented reality applications such as medical visualization, maintenance and repair, robot path planning, entertainment, military aircraft navigation, and targeting applications have been proposed. This paper introduces the development of an augmented reality game which allows the user to carry out lower limb exercise using a natural user interface based on Microsoft Kinect. The system has been designed as an augmented game where users can see themselves in a world augmented with virtual objects generated by computer graphics. The player sitting in a chair just has to step on a mole that appears and disappears by moving upward and downward randomly. It encourages the activities of a large number of lower limb muscles which will help prevent falls. It is also suitable for rehabilitation.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"6 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125020448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yisi Liu, Zirui Lan, F. Trapsilawati, O. Sourina, Chun-Hsien Chen, W. Müller-Wittig
To deal with the increasing demands in Air Traffic Control (ATC), new workplace designs are being proposed and developed, and these call for novel human factors evaluation tools. In this paper, we propose a novel application of electroencephalogram (EEG)-based emotion, workload, and stress recognition algorithms to investigate the optimal length of training for Air Traffic Control Officers (ATCOs) learning to work with a three-dimensional (3D) display as a supplement to the existing 2D display. We tested and applied state-of-the-art EEG-based subject-dependent algorithms. Twelve ATCOs were recruited to take part in the experiment. The participants were in charge of the Terminal Control Area, providing navigation assistance to aircraft departing from and approaching the airport using 2D and 3D displays. EEG data were recorded, and traditional human factors questionnaires were given to the participants after 15, 60, and 120 minutes of training. Unlike the questionnaires, the EEG-based evaluation tools allow emotions, workload, and stress to be recognized at different temporal resolutions while subjects perform the task. The results showed that 50 minutes of training could be enough for the ATCOs to learn the new display setting, as they exhibited relatively low stress and workload. The study demonstrated the potential of applying EEG-based human factors evaluation tools to assess novel system designs in addition to traditional questionnaires and feedback, which can benefit future improvements and developments of such systems and interfaces.
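The abstract does not specify the recognition algorithms, so the following is only a generic sketch of a subject-dependent EEG classifier of the kind alluded to: band-power features per channel fed to an SVM. The band definitions, sampling rate, window length, and classifier choice are common defaults, not the authors' method; the data here are synthetic.

```python
# Generic subject-dependent EEG workload classifier sketch (assumed pipeline).
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # Hz, assumed

def band_power_features(eeg, fs=256):
    """eeg: (n_channels, n_samples) array -> 1-D feature vector of band powers."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))
    return np.concatenate(feats)

# Example with synthetic data: 40 labelled windows from one subject.
rng = np.random.default_rng(0)
X = np.stack([band_power_features(rng.standard_normal((8, 512))) for _ in range(40)])
y = rng.integers(0, 2, size=40)  # e.g., low vs. high workload labels
clf = SVC(kernel="rbf").fit(X[:30], y[:30])
print("held-out accuracy:", clf.score(X[30:], y[30:]))
```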
{"title":"EEG-Based Human Factors Evaluation of Air Traffic Control Operators (ATCOs) for Optimal Training","authors":"Yisi Liu, Zirui Lan, F. Trapsilawati, O. Sourina, Chun-Hsien Chen, W. Müller-Wittig","doi":"10.1109/CW.2019.00049","DOIUrl":"https://doi.org/10.1109/CW.2019.00049","url":null,"abstract":"To deal with the increasing demands in Air Traffic Control (ATC), new working place designs are proposed and developed that need novel human factors evaluation tools. In this paper, we propose a novel application of Electroencephalogram (EEG)-based emotion, workload, and stress recognition algorithms to investigate the optimal length of training for Air Traffic Control Officers (ATCOs) to learn working with three-dimensional (3D) display as a supplementary to the existing 2D display. We tested and applied the state-of-the-art EEG-based subject-dependent algorithms. The following experiment was carried out. Twelve ATCOs were recruited to take part in the experiment. The participants were in charge of the Terminal Control Area, providing navigation assistance to aircraft departing and approaching the airport using 2D and 3D displays. EEG data were recorded, and traditional human factors questionnaires were given to the participants after 15-minute, 60-minute, and 120-minute training. Different from the questionnaires, the EEG-based evaluation tools allow the recognition of emotions, workload, and stress with different temporal resolutions during the task performance by subjects. The results showed that 50-minute training could be enough for the ATCOs to learn the new display setting as they had relatively low stress and workload. The study demonstrated that there is a potential of applying the EEG-based human factors evaluation tools to assess novel system designs in addition to traditional questionnaire and feedback, which can be beneficial for future improvements and developments of the systems and interfaces.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114026108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Demographic-based identification plays an active role in the field of face identification. Over the past decade, machine learning algorithms have been used to investigate challenges surrounding ethnic classification for specific populations, such as African, Asian, and Caucasian people. Ethnic classification for individuals of South Asian (Pakistani) heritage, however, remains to be addressed. The present paper addresses a two-category (Pakistani vs. non-Pakistani) classification task on a novel, purpose-built dataset. To the best of our knowledge, this work is the first to report a machine learning ethnic classification task with South Asian (Pakistani) faces. We conducted a series of experiments using deep learning models (ResNet-50, ResNet-101, and ResNet-152) for feature extraction and a linear support vector machine (SVM) for classification. The experimental results show that ResNet-101 achieves the highest accuracy, 99.2%, for full-face ethnicity classification, with 91.7% and 95.7% for the nose and mouth regions, respectively.
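A hedged sketch of the pipeline the abstract describes: a pre-trained ResNet-101 used as a fixed feature extractor, with a linear SVM on top. The dataset paths, preprocessing choices, and the exact layer tapped are assumptions for illustration, not the authors' configuration (requires torchvision ≥ 0.13 for the weights API).

```python
# ResNet-101 as a fixed feature extractor, linear SVM as classifier (assumed setup).
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import LinearSVC
from PIL import Image

# Replace the classification head with identity so forward() returns 2048-D features.
backbone = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract(img_path):
    """Return a 2048-D deep feature vector for one face image."""
    x = preprocess(Image.open(img_path).convert("RGB")).unsqueeze(0)
    return backbone(x).squeeze(0).numpy()

# Hypothetical usage: `face_paths` / `labels` would come from the purpose-built dataset.
# X = [extract(p) for p in face_paths]
# clf = LinearSVC(C=1.0).fit(X, labels)   # labels: 1 = Pakistani, 0 = non-Pakistani
```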
{"title":"On the Ethnic Classification of Pakistani Face using Deep Learning","authors":"S. Jilani, H. Ugail, A. M. Bukar, Andrew Logan","doi":"10.1109/CW.2019.00039","DOIUrl":"https://doi.org/10.1109/CW.2019.00039","url":null,"abstract":"Demographic-based identification plays an active role in the field of face identification. Over the past decade, machine learning algorithms have been used to investigate challenges surrouding ethnic classification for specific populations, such as African, Asian and Caucasian people. Ethnic classification for individuals of South Asian, Pakistani heritage, however, remains to be addressed. The present paper addresses a two-category (Pakistani Vs Non-Pakistani) classification task from a novel, purpose-built dataset. To the best of our knowledge, this work is the first to report a machine learning ethnic classification task with South Asian (Pakistani) faces. We conduted a series of experiments using deep learning algorithms (ResNet-50, ResNet-101 and ResNet-152) for feature extraction and a linear support vector machine (SVM) for classification. The experimental results demonstrate ResNet-101 achieves the highest performance accuracy of 99.2% for full-face ethnicity classification, followed closely by 91.7% and 95.7% for the nose and mouth respectively.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129100120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
V. Kannappan, Owen Noel Newton Fernando, A. Chattopadhyay, Xavier Tan, Jeffrey Hong, S. H. Soon, Hui En Lye
This research aims to implement the productive failure teaching concept with interactive learning games as a method to nurture innovative teaching and learning. It also aims to promote innovative approaches to learning, improve students' learning experience, and deepen their understanding of the linked list data structure taught in computer science courses, a concept that students do not widely understand. A 2D bridge-building puzzle game, "La Petite Fee Cosmo", was developed to help students not only understand the underlying concepts of the linked list but also foster creative use of its various operations in diverse situations.
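To make concrete the data structure the game exercises, here is a textbook singly linked list with the basic append and traversal operations; the game's own mechanics (e.g., bridge segments as nodes) are described only at a conceptual level in the paper, so the mapping below is illustrative.

```python
# Minimal singly linked list: the concepts the game is designed to teach.
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def append(head, value):
    """Append a value and return the (possibly new) head."""
    if head is None:
        return Node(value)
    cur = head
    while cur.next:
        cur = cur.next
    cur.next = Node(value)
    return head

def to_list(head):
    """Traverse the list and collect its values."""
    out = []
    while head:
        out.append(head.value)
        head = head.next
    return out

bridge = None
for plank in ["start", "mid", "end"]:   # hypothetical bridge segments as nodes
    bridge = append(bridge, plank)
print(to_list(bridge))  # ['start', 'mid', 'end']
```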
{"title":"La Petite Fee Cosmo: Learning Data Structures Through Game-Based Learning","authors":"V. Kannappan, Owen Noel Newton Fernando, A. Chattopadhyay, Xavier Tan, Jeffrey Hong, S. H. Soon, Hui En Lye","doi":"10.1109/CW.2019.00041","DOIUrl":"https://doi.org/10.1109/CW.2019.00041","url":null,"abstract":"This research aims to implement the productive failure teaching concept with interactive learning games as a method to nurture innovative teaching and learning. The research also aims to promote innovative approaches to learning and improving students' learning experience, and their understanding of linked list data structure concepts taught in computer science subjects since students do not widely understand this concept. A 2D bridge building puzzle game, “La Petite Fee Cosmo” was developed to assist students in not only understanding the underlying concepts of the linked list but also foster creative usage of the various functionalities of linked list in diverse situations.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133834919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A font is an important element in designing printed materials that include text, such as documents, posters, leaflets, and pamphlets. Recently, many digital fonts with different styles have become available for desktop publishing, but the number of Japanese-language fonts is smaller than that of European ones. This causes a problem when designing materials that include both Japanese and European letters. Creating a new font is difficult and requires specialized knowledge and experience. Our research goal is to address this problem by transferring the styles of European fonts to Japanese characters using a neural network. In this paper, we report experimental results using the well-known deep learning framework "pix2pix".
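pix2pix learns an image-to-image mapping from paired examples, so a practical first step is rendering aligned glyph images. Below is a minimal sketch using Pillow; the font paths and the pairing scheme (the same character rendered in a "source" and a "target" style) are illustrative assumptions, not the authors' data pipeline.

```python
# Render aligned glyph images as A/B training pairs for a pix2pix-style model.
from PIL import Image, ImageDraw, ImageFont

def render_glyph(char, font_path, size=256, font_size=200):
    """Render a single character as a white-on-black image, roughly centred."""
    img = Image.new("L", (size, size), color=0)
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype(font_path, font_size)
    left, top, right, bottom = draw.textbbox((0, 0), char, font=font)
    x = (size - (right - left)) // 2 - left
    y = (size - (bottom - top)) // 2 - top
    draw.text((x, y), char, fill=255, font=font)
    return img

# Hypothetical font paths; any installed .ttf/.otf fonts would do.
# src = render_glyph("あ", "plain_japanese.ttf")      # input domain
# tgt = render_glyph("あ", "stylised_target.otf")     # target domain
# Pairs like (src, tgt) form the paired training set that pix2pix expects.
```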
{"title":"Fonts Style Transfer using Conditional GAN","authors":"Naho Sakao, Y. Dobashi","doi":"10.1109/CW.2019.00075","DOIUrl":"https://doi.org/10.1109/CW.2019.00075","url":null,"abstract":"A font is an important element in designing printed materials including texts, such as documents, posters, leaflets, pamphlets, etc. Recently, many digital fonts with different styles are available for desktop publishing, but the number of Japanese-language fonts is smaller than that of European ones. This causes a problem when designing the materials including Japanese and European letters. Creating a new font is difficult and requires specialized knowledge and experience. Our research goal is to address this problem by transferring styles of the European fonts to Japanese characters by using a neural network. In this paper, we report some experimental results using the well-known deep learning framework called \"pix2pix.\"","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131773015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Composite sketch recognition belongs to heterogeneous face recognition research and is of great importance in the field of criminal investigation. Because composite face sketches and photos belong to different modalities, a robust representation of facial features across modalities is the key to recognition. Considering that composite sketches lack texture details in some areas, and that using texture features alone may therefore result in low recognition accuracy, this paper proposes a composite sketch recognition algorithm based on multi-scale HOG features and semantic attributes. First, global HOG features of the face and local HOG features of each facial component are extracted to represent contour and detail information. The global and detail features are then fused at the score level according to their importance. Finally, semantic attributes are employed to re-rank the matching results. The proposed algorithm is validated on the PRIP-VSGC and UoM-SGFS databases, achieving rank-10 identification accuracies of 88.6% and 96.7%, respectively, which demonstrates that it outperforms other state-of-the-art methods.
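A hedged sketch of the score-level fusion idea: HOG descriptors are computed for the whole face and for a component crop, each yields a similarity score against a gallery photo, and the scores are combined with importance weights. The crop coordinates, weights, and similarity measure are placeholders, not the paper's values.

```python
# Score-level fusion of global and component-level HOG similarities (assumed weights).
import numpy as np
from skimage.feature import hog

def hog_descriptor(gray_img):
    return hog(gray_img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def fused_score(sketch, photo, component_box=(60, 130, 80, 150), w_global=0.6, w_local=0.4):
    """sketch, photo: aligned grayscale arrays of the same size; box = (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = component_box
    s_global = cosine(hog_descriptor(sketch), hog_descriptor(photo))
    s_local = cosine(hog_descriptor(sketch[y0:y1, x0:x1]), hog_descriptor(photo[y0:y1, x0:x1]))
    return w_global * s_global + w_local * s_local

# Example with synthetic images; ranking a real gallery would sort photos by this score.
rng = np.random.default_rng(1)
sketch_img = rng.random((200, 200))
photo_img = rng.random((200, 200))
print(fused_score(sketch_img, photo_img))
```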
{"title":"Composite Sketch Recognition Using Multi-scale Hog Features and Semantic Attributes","authors":"Xinying Xue, Jiayi Xu, Xiaoyang Mao","doi":"10.1109/CW.2019.00028","DOIUrl":"https://doi.org/10.1109/CW.2019.00028","url":null,"abstract":"Composite sketch recognition belongs to heterogeneous face recognition research, which is of great important in the field of criminal investigation. Because composite face sketch and photo belong to different modalities, robust representation of face feature cross different modalities is the key to recognition. Considering that composite sketch lacks texture details in some area, using texture features only may result in low recognition accuracy, this paper proposes a composite sketch recognition algorithm based on multi-scale Hog features and semantic attributes. Firstly, the global Hog features of the face and the local Hog features of each face component are extracted to represent the contour and detail features. Then the global and detail features are fused according to their importance at score level. Finally, semantic attributes are employed to reorder the matching results. The proposed algorithm is validated on PRIP-VSGC database and UoM-SGFS database, and achieves rank 10 identification accuracy of 88.6% and 96.7% respectively, which demonstrates that the proposed method outperforms other state-of-the-art methods.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115564316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recently, deep convolutional neural networks (CNNs) have become a new standard in many machine learning applications, not only in image but also in audio processing. However, most studies explore only a single type of training data. In this paper, we present a study on classifying bird species by combining deep neural features of both visual and audio data using a kernel-based fusion method. Specifically, we extract deep neural features from the activation values of an inner layer of the CNN and combine them by multiple kernel learning (MKL) to perform the final classification. In the experiments, we train and evaluate our method on the standard CUB-200-2011 data set combined with our originally collected audio data set covering 200 bird species (classes). The experimental results indicate that our CNN+MKL method, which exploits both categories of data, outperforms single-modality methods, several simple kernel combination methods, and the conventional early fusion method.
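As a simplified stand-in for the fusion step, the sketch below builds one RBF kernel from visual CNN features and one from audio CNN features, mixes them with a fixed weight, and trains an SVM on the precomputed kernel. True MKL would learn the mixing weights rather than fix them, and the features here are synthetic placeholders for real CNN activations.

```python
# Fixed-weight kernel combination as a simplified illustration of multi-kernel fusion.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 60
visual_feats = rng.standard_normal((n, 512))   # stand-in for CNN activations of images
audio_feats = rng.standard_normal((n, 512))    # stand-in for CNN activations of audio
labels = rng.integers(0, 3, size=n)            # e.g., three bird species

def combined_kernel(Xv, Xa, Yv=None, Ya=None, beta=0.5):
    """Convex combination of per-modality RBF kernels."""
    return beta * rbf_kernel(Xv, Yv) + (1 - beta) * rbf_kernel(Xa, Ya)

train, test = slice(0, 45), slice(45, n)
K_train = combined_kernel(visual_feats[train], audio_feats[train])
K_test = combined_kernel(visual_feats[test], audio_feats[test],
                         visual_feats[train], audio_feats[train])
clf = SVC(kernel="precomputed").fit(K_train, labels[train])
print("held-out accuracy:", clf.score(K_test, labels[test]))
```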
{"title":"Bird Species Classification with Audio-Visual Data using CNN and Multiple Kernel Learning","authors":"B. Naranchimeg, Chao Zhang, T. Akashi","doi":"10.1109/CW.2019.00022","DOIUrl":"https://doi.org/10.1109/CW.2019.00022","url":null,"abstract":"Recently, deep convolutional neural networks (CNN) have become a new standard in many machine learning applications not only in image but also in audio processing. However, most of the studies only explore a single type of training data. In this paper, we present a study on classifying bird species by combining deep neural features of both visual and audio data using kernel-based fusion method. Specifically, we extract deep neural features based on the activation values of an inner layer of CNN. We combine these features by multiple kernel learning (MKL) to perform the final classification. In the experiment, we train and evaluate our method on a CUB-200-2011 standard data set combined with our originally collected audio data set with respect to 200 bird species (classes). The experimental results indicate that our CNN+MKL method which utilizes the combination of both categories of data outperforms single-modality methods, some simple kernel combination methods, and the conventional early fusion method.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122665536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This research project developed a Virtual Reality (VR) training simulator for the CPR procedure. It is designed for use in training school children and can also form part of a larger system for training paramedics with VR. The simulator incorporates a number of advanced VR technologies, including the Oculus Rift and Leap Motion. We have gained input from NHS paramedics and several related organisations to design the system and to provide feedback on and evaluation of the preliminary working prototype.
{"title":"CPR Virtual Reality Training Simulator for Schools","authors":"N. Vaughan, N. John, N. Rees","doi":"10.1109/CW.2019.00013","DOIUrl":"https://doi.org/10.1109/CW.2019.00013","url":null,"abstract":"This research project developed a Virtual Reality (VR) training simulator for the CPR procedure. This is designed for use training school children. It can also form part of a larger system for training paramedics with VR. The simulator incorporates a number of advanced VR technologies including Oculus Rift and Leap motion. We have gained input from NHS paramedics and several related organisation to design the system and provide feedback and evaluation of the preliminary working prototype.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129848545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
It is important to measure the user's biological information while they experience virtual reality (VR) content. By measuring such biological information during VR stimulation, the body's response to the stimulation can be verified. In addition, it is possible to change the stimulation interactively by estimating the user's feelings from the measured biological information. However, the burden on the user of wearing sensors for biological information sensing during an existing VR content experience is significant, and noise due to body movement is also a problem. In this paper, a biometric device that can be mounted on a head-mounted display (HMD) was developed. Because an HMD is attached firmly to the face, the device is expected to be robust to body movement, and the burden of wearing the sensor can be ignored. The developed device can simply be mounted on an HMD. A pulse waveform can be acquired from an optical pulse wave sensor placed on the nose side of the HMD, and a respiration waveform can be acquired from a thermopile placed in the nostril area of the HMD. We conducted an experiment to verify that the pulse wave and respiration can be measured with sufficient accuracy to estimate the user's tension and excitement. The experiment confirmed that the pulse wave can be measured with an error of less than 1% in nine out of 14 users and that respiration can be measured with an error of 0.6% if the user does not move. Respiration was measured with high accuracy regardless of the type of HMD used.
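To make the signal-processing step concrete: once the nose-side optical sensor and the nostril-area thermopile deliver raw waveforms, heart rate and respiration rate can be estimated by simple peak counting. This is a generic sketch with synthetic signals and assumed sampling rates, not the authors' processing pipeline.

```python
# Estimate heart rate and respiration rate from raw waveforms by peak counting.
import numpy as np
from scipy.signal import find_peaks

def rate_per_minute(signal, fs, min_interval_s):
    """Count peaks separated by at least min_interval_s and convert to events per minute."""
    peaks, _ = find_peaks(signal, distance=int(min_interval_s * fs))
    duration_min = len(signal) / fs / 60.0
    return len(peaks) / duration_min

fs = 100.0                                # Hz, assumed sampling rate
t = np.arange(0, 60, 1 / fs)
pulse = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)    # ~72 bpm pulse wave
breath = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.randn(t.size)  # ~15 breaths/min

print("heart rate (bpm):", rate_per_minute(pulse, fs, min_interval_s=0.4))
print("respiration (breaths/min):", rate_per_minute(breath, fs, min_interval_s=2.0))
```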
{"title":"Development of Easy Attachable Biological Information Measurement Device for Various Head Mounted Displays","authors":"Masahiro Inazawa, Yuki Ban","doi":"10.1109/CW.2019.00009","DOIUrl":"https://doi.org/10.1109/CW.2019.00009","url":null,"abstract":"It is important to measure the user's biological information when experiencing virtual reality (VR) content. By measuring such biological information during a VR stimulation, the body's response to the stimulation can be verified. In addition, it is possible to change the stimulation interactively by estimating the feeling from the measured biological information. However, the user load required to mount the sensor for biological information sensing under the existing VR content experience is significant, and the noise due to body movement is also a problem. In this paper, a biometric device that can be mounted on a head mounted display (HMD) was developed. Because an HMD is attached strongly to the face, it is thought to be robust to body movement and thus the mounting load of the sensor can be ignored. The developed device can simply be mounted on an HMD. A pulse waveform can be acquired from the optical pulse wave sensor arranged on the nose side of the HMD, and the respiration waveform can be acquired from a thermopile arranged in the nostril area of the HMD. We condacted the experiment to verified that a pulse wave and the respiration can be measured with sufficient accuracy for a calculation of the tension and excitement of the user. As a result of the experiment, it was confirmed that the pulse wave can be measured with an error of less than 1% in nine out of 14 users and that the respiration can be measured with an error of 0.6% if user does not move. The respiration was measured with high accuracy regardless of the type of HMD used.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130984756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the increasing availability of consumer brain-computer interfaces, new methods of authentication can be considered. In this paper, we present a shoulder-surfing-resistant means of entering a graphical password by measuring brain activity. The password is a subset of images displayed repeatedly by rapid serial visual presentation. The occurrence of a password image evokes an event-related potential in the electroencephalogram, the P300 response, which is used to classify whether an image belongs to the password subset or not. We compare individual classifiers, trained with samples of a specific user, to general P300 classifiers trained over all subjects, and we evaluate the permanence of the classification results over three subsequent experiment sessions. The classification score increases significantly from the first to the third session. Comparing the use of natural photos or simple objects as stimuli shows no significant difference. In total, our authentication scheme achieves an equal error rate of about 10%. In the future, with increasing accuracy and proliferation, brain-computer interfaces could find practical application in alternative authentication methods.
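A generic sketch of the classification step: epochs of EEG time-locked to each image onset are flattened and fed to a linear classifier that decides whether a P300 was present (password image) or not. The channel count, epoch length, and use of LDA are common defaults for P300 detection, not necessarily the authors' exact settings; the epochs below are synthetic.

```python
# P300 present/absent classification on synthetic stimulus-locked EEG epochs.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

fs, n_channels = 250, 8
epoch_samples = int(0.8 * fs)          # 0 - 800 ms after stimulus onset

def make_epoch(has_p300, rng):
    """Synthetic epoch: noise plus, for password images, a bump around 300 ms."""
    x = rng.standard_normal((n_channels, epoch_samples))
    if has_p300:
        t = np.arange(epoch_samples) / fs
        x += 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))  # broadcast over channels
    return x.ravel()

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)                         # 1 = password image shown
X = np.stack([make_epoch(bool(label), rng) for label in y])

clf = LinearDiscriminantAnalysis().fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```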
{"title":"A Shoulder-Surfing Resistant Image-Based Authentication Scheme with a Brain-Computer Interface","authors":"Florian Gondesen, Matthias Marx, Ann-Christine Kycler","doi":"10.1109/CW.2019.00061","DOIUrl":"https://doi.org/10.1109/CW.2019.00061","url":null,"abstract":"With the increasing availability of consumer brain-computer interfaces, new methods of authentication can be considered. In this paper, we present a shoulder surfing resistant means of entering a graphical password by measuring brain activity. The password is a subset of images displayed repeatedly by rapid serial visual presentation. The occurrence of a password image entails an event-related potential in the electroencephalogram, the P300 response. The P300 response is used to classify whether an image belongs to the password subset or not. We compare individual classifiers, trained with samples of a specific user, to general P300 classifiers, trained over all subjects. We evaluate the permanence of the classification results in three subsequent experiment sessions. The classification score significantly increases from the first to the third session. Comparing the use of natural photos or simple objects as stimuli shows no significant difference. In total, our authentication scheme achieves an equal error rate of about 10%. In the future, with increasing accuracy and proliferation, brain-computer interfaces could find practical application in alternative authentication methods.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129456260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}