Hiromichi Ichige, M. Toyoura, K. Go, K. Kashiwagi, I. Fujishiro, Xiaoyang Mao
The number of individuals with Age-related Macular Degeneration (AMD) is rapidly increasing. One of the main symptoms of AMD is "metamorphopsia," or distorted vision, which not only makes it difficult for individuals with AMD to do detail-oriented tasks but also makes sufferers more vulnerable to certain risks in day-to-day life. Traditional clinical approaches to assessing metamorphopsia have lacked mechanisms for quantifying the degree of distortion in space, making it impossible to know exactly how individuals with the condition see things. This paper proposes a new method for quantifying distortion in space and visualizing AMD patients' distorted views via line manipulation. By visualizing the distorted views stemming from metamorphopsia, the method gives doctors and others an intuitive picture of how patients see the world and thereby enables a broad range of options for treatment and support.
"Visual Assessment of Distorted View for Metamorphopsia Patient by Interactive Line Manipulation," Hiromichi Ichige, M. Toyoura, K. Go, K. Kashiwagi, I. Fujishiro, Xiaoyang Mao. 2019 International Conference on Cyberworlds (CW), October 2019. DOI: 10.1109/CW.2019.00038.
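The abstract does not give the paper's algorithm, but the core idea of quantifying distortion from how a patient drags points on a straight reference line into their perceived shape can be sketched as follows. The function names and the scalar summary (mean absolute displacement) are our illustrative choices, not the paper's:

```python
import numpy as np

def distortion_profile(true_y, perceived_y):
    """Per-point vertical displacement between a straight reference line
    and where the patient places it after manipulating control points."""
    return np.asarray(perceived_y, float) - np.asarray(true_y, float)

def distortion_magnitude(true_y, perceived_y):
    """A single scalar summary of the distortion: mean absolute displacement."""
    return float(np.mean(np.abs(distortion_profile(true_y, perceived_y))))

# A horizontal line at y = 0 that a patient perceives as bowed upward:
true_y = np.zeros(5)
perceived_y = np.array([0.0, 0.4, 0.9, 0.4, 0.0])
```

The per-point profile could then be interpolated across many such lines to build the spatial distortion map that the paper visualizes.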
This research project developed a Virtual Reality (VR) training simulator for paramedic procedures. Currently, needle cricothyroidotomy and chest drain insertion are modelled; these could form part of a larger system for training paramedics in various other procedures with VR. The simulator incorporates a number of advanced VR technologies, including the Oculus Rift and haptic feedback. We gained input from NHS paramedics and several related organisations in designing the system, and they provided feedback and evaluation of the preliminary working prototype.
"ParaVR: Paramedic Virtual Reality Training Simulator," N. Vaughan, N. John, N. Rees. 2019 International Conference on Cyberworlds (CW), October 2019. DOI: 10.1109/CW.2019.00012.
In this paper, we propose a method for semi-automatically creating an anime-like 3D face model from a single illustration. In the proposed method, principal component analysis (PCA) is applied to existing anime-like 3D models to obtain base models for generating natural 3D models. To align the dimensions of the data and establish geometric correspondence, a template model is deformed using a modified Non-rigid Iterative Closest Point (NICP) method. Then, the coefficients of the linear combination of the base models are estimated by minimizing the difference between the rendered image of the 3D model with those coefficients and the input illustration, using edge-based matching. We confirmed that our method was able to generate natural anime-like 3D face models whose eye and face shapes are similar to those of the input illustration.
"Semi-Automatic Creation of an Anime-Like 3D Face Model from a Single Illustration," T. Niki, T. Komuro. 2019 International Conference on Cyberworlds (CW), October 2019. DOI: 10.1109/CW.2019.00017.
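As a rough illustration of the pipeline, the sketch below obtains base models by PCA and fits linear-combination coefficients to a target shape by least squares. The paper estimates the coefficients by edge-based matching against the rendered image, which is not reproduced here; the random data stands in for registered face meshes:

```python
import numpy as np

# Stand-in data: each row is a flattened, registered anime-like 3D face mesh.
rng = np.random.default_rng(0)
models = rng.normal(size=(20, 300))          # 20 existing models
mean = models.mean(axis=0)

# PCA via SVD of the centered data; the principal components act as base models.
U, S, Vt = np.linalg.svd(models - mean, full_matrices=False)
bases = Vt[:5]                               # keep 5 base models

# The paper fits the coefficients by edge-based image matching; as a
# simplified stand-in, fit them to a known target shape by least squares.
target = mean + 0.7 * bases[0] - 0.3 * bases[2]
coeffs, *_ = np.linalg.lstsq(bases.T, target - mean, rcond=None)
reconstructed = mean + bases.T @ coeffs
```

Because the SVD rows are orthonormal, the least-squares fit recovers the generating coefficients exactly when the target lies in the span of the base models.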
I. Volkau, A. Mujeeb, Wenting Dai, Marius Erdt, A. Sourin
Automatic optical inspection for manufacturing has traditionally been based on computer vision. However, there are emerging attempts to perform it using deep learning. Deep convolutional neural networks can learn semantic image features that can be used for defect detection in products. In contrast to existing approaches, in which supervised or semi-supervised training is done on thousands of images of defects, we investigate whether an unsupervised deep learning model for defect detection can be trained with an orders-of-magnitude smaller number of representative defect-free samples (tens rather than thousands). This research is motivated by the fact that collecting large amounts of defective samples is difficult and expensive. Our model undergoes only one-class training and aims to extract distinctive semantic features from the normal samples in an unsupervised manner. We propose a variant of transfer learning that combines unsupervised learning with a VGG16 network whose weights were pre-trained on ImageNet. To demonstrate defect detection, we used a set of Printed Circuit Boards (PCBs) with different types of defects: scratches, missing washers/extra holes, abrasion, and broken PCB edges. The trained model allows us to form clusters of normal internal feature representations of PCBs in a high-dimensional feature space, and to localize defective patches in a PCB image based on their distance from the normal clusters. Initial results show that more than 90% of defects were detected.
"Detection Defect in Printed Circuit Boards using Unsupervised Feature Extraction Upon Transfer Learning," I. Volkau, A. Mujeeb, Wenting Dai, Marius Erdt, A. Sourin. 2019 International Conference on Cyberworlds (CW), October 2019. DOI: 10.1109/CW.2019.00025.
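A minimal sketch of the distance-from-normal idea, simplified to a single centroid rather than the paper's multiple clusters in VGG16 feature space. The synthetic vectors stand in for network activations of image patches, and the threshold rule is our illustrative choice:

```python
import numpy as np

def fit_normal_model(normal_feats):
    """One-class 'training': summarize defect-free patch features by their
    centroid and a distance threshold (mean + 3 * std of the distances)."""
    center = normal_feats.mean(axis=0)
    d = np.linalg.norm(normal_feats - center, axis=1)
    return center, d.mean() + 3 * d.std()

def is_defective(feat, center, threshold):
    """Flag a patch whose feature vector lies far from the normal centroid."""
    return np.linalg.norm(feat - center) > threshold

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 0.1, size=(40, 64))   # tens of normal samples suffice
center, thr = fit_normal_model(normal)
```

In the paper's setting, the same test applied patch-by-patch over a PCB image localizes the defective regions.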
Abir Mhenni, Denis Migdal, E. Cherrier, C. Rosenberger, N. Amara
Studies of keystroke dynamics, and of adaptive strategies in particular, have commonly considered only impersonation attempts known as zero-effort attacks. Such attacks are generally samples acquired from other users of the same database typing the same password, without any intention to impersonate the genuine user's account. To address more realistic scenarios, in this paper we study the robustness of an adaptive strategy against four types of impostor attacks applied to the WEBGREYC database: zero-effort, spoof, playback, and synthetic. Experimental results show that 1) playback and synthetic attacks are the most dangerous and increase the EER compared to the other attacks; 2) the impact of these attacks is more pronounced when the percentage of impostor samples exceeds that of genuine ones; 3) spoof attacks achieve alarmingly higher FMR, FNMR, and EER rates compared to zero-effort impostor attacks; 4) FMR, FNMR, and EER rise as the percentage of attacks increases; 5) attacks originating from a single user are more dangerous than those from different users, particularly when the percentage of attacks increases. In light of our results, we point out that the traditional attacks considered in research on keystroke-based authentication must evolve along with the attacks on today's password-based applications.
"Vulnerability of Adaptive Strategies of Keystroke Dynamics Based Authentication Against Different Attack Types," Abir Mhenni, Denis Migdal, E. Cherrier, C. Rosenberger, N. Amara. 2019 International Conference on Cyberworlds (CW), October 2019. DOI: 10.1109/CW.2019.00052.
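The FMR, FNMR, and EER figures reported above are standard biometric error rates computed from genuine and impostor score distributions. A minimal sketch, assuming lower scores mean better matches:

```python
import numpy as np

def eer(genuine_scores, impostor_scores):
    """Equal Error Rate: sweep a threshold over distance-like scores.
    FNMR = fraction of genuine attempts rejected (score above threshold);
    FMR  = fraction of impostor attempts accepted (score at or below it).
    Returns the average of FNMR and FMR at the point where they are closest."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best = (1.0, None)
    for t in thresholds:
        fnmr = np.mean(genuine_scores > t)
        fmr = np.mean(impostor_scores <= t)
        gap = abs(fnmr - fmr)
        if gap < best[0]:
            best = (gap, (fnmr + fmr) / 2)
    return best[1]

genuine = np.array([0.1, 0.2, 0.25, 0.3])
impostor = np.array([0.28, 0.5, 0.6, 0.9])
```

The paper's observation that attacks raise the EER corresponds here to the impostor distribution shifting toward the genuine one.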
Daiki Hagimori, N. Isoyama, Shunsuke Yoshimoto, Nobuchika Sakata, K. Kiyokawa
In recent years, human augmentation has attracted much attention. One type of human augmentation, motion augmentation, makes perceived motion larger than it is in reality, and it can be used for a variety of applications such as rehabilitation of motor functions in stroke patients and more realistic virtual reality (VR) experiences such as redirected walking (RDW). However, as the augmented motion grows beyond the real motion, the accompanying sensory cues become increasingly inconsistent with somatic sensation, causing a severe sense of discomfort. To address this problem, we focus on kinesthetic illusions, psychological phenomena in which a person feels as if his or her own body is moving. Kinesthetic illusions are expected to fill the gap between the intended augmented motion and the perceived physical motion. However, it has not been explored whether, and how strongly, kinesthetic illusions are produced while a user is moving their limbs voluntarily in VR. To expand the knowledge on kinesthetic illusions, we conducted two user studies on the impact of tendon vibration and visual stimuli on kinesthetic illusions. The first experiment confirmed that the perceived elbow angle becomes larger than the actual angle when tendon vibration is presented. The second experiment revealed that the increase in perceived elbow angle was about 20 degrees when both tendon vibration and visual stimuli were presented, versus about 10 degrees when only visual stimuli were presented. Through these experiments, it has been confirmed that combining tendon vibration and visual stimulation enhances kinesthetic illusions.
"Combining Tendon Vibration and Visual Stimulation Enhances Kinesthetic Illusions," Daiki Hagimori, N. Isoyama, Shunsuke Yoshimoto, Nobuchika Sakata, K. Kiyokawa. 2019 International Conference on Cyberworlds (CW), October 2019. DOI: 10.1109/CW.2019.00029.
Kazi Mahmudul Hassan, M. Islam, Toshihisa Tanaka, M. I. Molla
Electroencephalography (EEG) is considered a potential tool for the diagnosis of epilepsy in clinical applications. Epileptic seizures occur irregularly and unpredictably, so their automatic detection in EEG recordings is highly desirable. In this work, multiband features are used to detect seizures with a feedforward neural network (FfNN). The EEG signal is segmented into epochs of short duration, and each epoch is decomposed into a number of subbands using the discrete wavelet transform (DWT). Three features, namely the ellipse area of the second-order difference plot, the coefficient of variation, and the fluctuation index, are computed from each subband signal. The features obtained from all subbands are combined to construct the feature vector. The FfNN is trained using the derived feature vectors, and seizure detection is performed on test data. The experiment is performed on a publicly available dataset to evaluate the performance of the proposed method. The experimental results show the superiority of this method over recently developed algorithms.
"Epileptic Seizure Detection from EEG Signals Using Multiband Features with Feedforward Neural Network," Kazi Mahmudul Hassan, M. Islam, Toshihisa Tanaka, M. I. Molla. 2019 International Conference on Cyberworlds (CW), October 2019. DOI: 10.1109/CW.2019.00046.
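A dependency-free sketch of the feature extraction, using a one-level Haar DWT in place of the paper's multi-level wavelet decomposition. The second-order-difference-plot (SODP) ellipse-area formula below follows the definition commonly used in this literature and may differ in detail from the paper's:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail subbands."""
    x = np.asarray(x, float)
    if len(x) % 2:
        x = x[:-1]
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def subband_features(s):
    """The three per-subband features: SODP ellipse area, coefficient of
    variation, and fluctuation index."""
    d = np.diff(s)
    d1, d2 = d[:-1], d[1:]                       # SODP axes: d(n) vs d(n+1)
    sx, sy = np.sqrt(np.mean(d1**2)), np.sqrt(np.mean(d2**2))
    sxy = np.mean(d1 * d2)
    term = sx**2 + sy**2
    disc = np.sqrt(np.maximum(term**2 - 4 * (sx**2 * sy**2 - sxy**2), 0.0))
    a = 1.7321 * np.sqrt(term + disc)            # ellipse semi-axes
    b = 1.7321 * np.sqrt(np.maximum(term - disc, 0.0))
    ellipse_area = np.pi * a * b
    cov = np.std(s) / (np.abs(np.mean(s)) + 1e-12)   # coefficient of variation
    fluct = np.mean(np.abs(np.diff(s)))              # fluctuation index
    return np.array([ellipse_area, cov, fluct])

# Features from both subbands of one synthetic epoch form the FfNN input.
epoch = np.sin(np.linspace(0, 10, 256)) + 0.1 * np.random.default_rng(0).normal(size=256)
approx, detail = haar_dwt(epoch)
feature_vector = np.concatenate([subband_features(approx), subband_features(detail)])
```

With the paper's deeper decomposition, the feature vector simply grows to three features per subband.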
The inability to recognise the value of data held as digital objects and digital assets in virtual social environments amplifies the data-loss legacy challenges facing older adults. This study examines the data storage and transfer issues that arise when people pass away. Older people experience data loss when they engage with social digital environments. Social computing has different legacy practices for the transfer and mobility of digital assets than for physical assets. Recognising the value of digital assets, in their monetary, historical, sentimental, and legal dimensions, is critical to reducing unnecessary data loss under legacy conditions.
"Social Computing and Older Adults: Challenges with Data Loss and Digital Legacies," D. Dissanayake, David M. Cook. 2019 International Conference on Cyberworlds (CW), October 2019. DOI: 10.1109/CW.2019.00035.
Yisi Liu, Zirui Lan, Jian Cui, O. Sourina, W. Müller-Wittig
Mental fatigue is common in the workplace, and it can lead to decreased attention, vigilance, and cognitive performance, which is dangerous in situations such as driving and vessel maneuvering. By directly measuring neurophysiological activity in the brain, the electroencephalography (EEG) signal can serve as a good indicator of mental fatigue. A classic EEG-based brain-state recognition system requires labeled data from the user to calibrate the classifier before each use. For fatigue recognition, we argue that this is impractical, since inducing the fatigue state is usually long and wearying. It is desirable that the system can be calibrated using readily available fatigue data and applied to a new user with adequate recognition accuracy. In this paper, we explore the performance of cross-subject fatigue recognition algorithms on a recently published EEG dataset labeled with two levels of fatigue. We evaluate three categories of classification methods: a classic classifier (logistic regression), a transfer-learning-enabled classifier (transfer component analysis), and a deep-learning-based classifier (EEGNet). Our results show that the transfer-learning-enabled classifier consistently outperforms the other two for cross-subject fatigue recognition. Specifically, transfer component analysis (TCA) improves the cross-subject recognition accuracy to 72.70%, which is 9.08% higher than logistic regression (LR) alone and 8.72-12.86% higher than EEGNet.
"EEG-Based Cross-Subject Mental Fatigue Recognition," Yisi Liu, Zirui Lan, Jian Cui, O. Sourina, W. Müller-Wittig. 2019 International Conference on Cyberworlds (CW), October 2019. DOI: 10.1109/CW.2019.00048.
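The cross-subject protocol can be illustrated with a leave-one-subject-out loop. This sketch shows only the plain logistic-regression baseline on synthetic data; TCA, which would additionally align the train and test feature distributions before fitting, is omitted:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def loso_accuracy(X, y, subjects):
    """Leave-one-subject-out evaluation: the classifier never sees the held-out
    subject's labeled data, mirroring the cross-subject setting."""
    accs = []
    for s in np.unique(subjects):
        test = subjects == s
        clf = LogisticRegression(max_iter=1000).fit(X[~test], y[~test])
        accs.append(clf.score(X[test], y[test]))
    return float(np.mean(accs))

# Synthetic stand-in for EEG features: two fatigue classes per subject,
# with a small subject-specific shift simulating inter-subject variability.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(c, 0.3, size=(30, 4)) + s * 0.2
               for s in range(3) for c in (0.0, 2.0)])
y = np.tile(np.repeat([0, 1], 30), 3)
subjects = np.repeat([0, 1, 2], 60)
```

When the subject shift dominates the class separation, this baseline degrades, which is the regime where distribution alignment such as TCA pays off.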
Many studies propose strong user authentication based on biometric modalities. However, they often assume a trusted component, are modality-dependent, use only one biometric modality, are reversible, or do not enable the service to adapt its security on the fly. A recent work introduced the concept of the Personal Identity Code Respecting Privacy (PICRP), a non-cryptographic and non-reversible signature computed from arbitrary information. In this paper, we extend this concept with keystroke dynamics, IP, and GPS geolocation by optimizing the pre-processing and merging of the collected information. We demonstrate the performance of the proposed approach through experimental results and present an example of its usage.
"My Behavior is my Privacy & Secure Password !," Denis Migdal, C. Rosenberger. 2019 International Conference on Cyberworlds (CW), October 2019. DOI: 10.1109/CW.2019.00056.
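A toy sketch of a PICRP-style signature: hashing quantized traits yields a non-reversible code that stays stable under small behavioral variation. The function name, quantization scheme, and parameter choices here are our assumptions for illustration, not the paper's pre-processing:

```python
import hashlib

def picrp_signature(keystroke_times_ms, ip, gps, precision=50):
    """Derive a non-reversible signature from behavioral and contextual traits.
    Quantizing each trait before hashing means small variations (slightly
    different typing rhythm, nearby GPS fix) map to the same signature,
    while the hash itself cannot be inverted to recover the traits."""
    # Bucket inter-key times so small typing variations share a bucket.
    buckets = tuple(int(t // precision) for t in keystroke_times_ms)
    lat, lon = round(gps[0], 1), round(gps[1], 1)  # coarse location
    material = repr((buckets, ip, lat, lon)).encode()
    return hashlib.sha256(material).hexdigest()
```

For example, two typing samples whose inter-key times all fall in the same 50 ms buckets from the same IP and neighborhood produce identical signatures, while a change of IP produces an unrelated one.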