
Latest Publications from the 2019 International Conference on Cyberworlds (CW)

A Kinect-Based Augmented Reality Game for Lower Limb Exercise
Pub Date : 2019-10-01 DOI: 10.1109/CW.2019.00077
Y. Tokuyama, R. J. Rajapakse, Sachi Yamabe, K. Konno, Y. Hung
Augmented reality (AR) integrates 3D virtual objects into a 3D real environment in real time. Augmented reality applications such as medical visualization, maintenance and repair, robot path planning, entertainment, and military aircraft navigation and targeting have been proposed. This paper introduces the development of an augmented reality game that allows the user to carry out lower limb exercise using a natural user interface based on Microsoft Kinect. The system is designed as an augmented game in which users see themselves in a world augmented with virtual objects generated by computer graphics. The player, seated in a chair, simply has to step on a mole that randomly pops up and down, appearing and disappearing. This engages a large number of lower limb muscles, which helps prevent falls, and the game is also suitable for rehabilitation.
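The core interaction described above is a hit test between the player's foot and the virtual mole. The following is a minimal sketch of that check, assuming a hypothetical `get_foot_positions()` helper standing in for the Kinect skeleton stream; the paper itself does not specify this logic.

```python
import math

# Hypothetical helper: in a real system this would read ankle joint coordinates
# from the Kinect skeleton stream (e.g. via a Kinect SDK wrapper).
def get_foot_positions():
    return {"left": (0.10, 0.02, 1.50), "right": (0.35, 0.03, 1.55)}

def distance(a, b):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def mole_is_stepped_on(mole_position, hit_radius=0.15):
    """Return True if either foot is within hit_radius metres of the mole."""
    feet = get_foot_positions()
    return any(distance(pos, mole_position) <= hit_radius for pos in feet.values())

if __name__ == "__main__":
    # Example: a mole popping up near the player's right foot.
    print(mole_is_stepped_on((0.34, 0.0, 1.5)))
```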
Citations: 5
EEG-Based Human Factors Evaluation of Air Traffic Control Operators (ATCOs) for Optimal Training
Pub Date : 2019-10-01 DOI: 10.1109/CW.2019.00049
Yisi Liu, Zirui Lan, F. Trapsilawati, O. Sourina, Chun-Hsien Chen, W. Müller-Wittig
To deal with the increasing demands of Air Traffic Control (ATC), new workplace designs are being proposed and developed, and these call for novel human factors evaluation tools. In this paper, we propose a novel application of Electroencephalogram (EEG)-based emotion, workload, and stress recognition algorithms to investigate the optimal length of training for Air Traffic Control Officers (ATCOs) learning to work with a three-dimensional (3D) display as a supplement to the existing 2D display. We tested and applied state-of-the-art EEG-based subject-dependent algorithms in the following experiment. Twelve ATCOs were recruited to take part. The participants were in charge of the Terminal Control Area, providing navigation assistance to aircraft departing from and approaching the airport using 2D and 3D displays. EEG data were recorded, and traditional human factors questionnaires were given to the participants after 15, 60, and 120 minutes of training. Unlike the questionnaires, the EEG-based evaluation tools allow emotions, workload, and stress to be recognized at different temporal resolutions while subjects perform the task. The results showed that 50 minutes of training could be enough for the ATCOs to learn the new display setting, as they exhibited relatively low stress and workload. The study demonstrated the potential of applying EEG-based human factors evaluation tools, alongside traditional questionnaires and feedback, to assess novel system designs, which can benefit future improvement and development of such systems and interfaces.
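The abstract does not detail the recognition algorithms themselves, so the sketch below only illustrates the general pattern of EEG-based workload estimation it refers to: band-power features computed with Welch's method feeding a subject-dependent classifier. The channel count, frequency bands, and SVM choice are assumptions for illustration, not the authors' settings.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

FS = 256  # assumed sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # typical EEG bands

def band_power_features(epoch):
    """epoch: array (n_channels, n_samples) -> concatenated per-channel band powers."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)
    feats = []
    for lo, hi in BANDS.values():
        idx = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, idx].mean(axis=1))
    return np.concatenate(feats)

# Toy subject-dependent training: epochs labelled with a workload level.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, 8, FS * 4))   # 40 epochs, 8 channels, 4 s each
labels = rng.integers(0, 2, size=40)            # 0 = low, 1 = high workload
X = np.array([band_power_features(e) for e in epochs])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```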
Citations: 2
On the Ethnic Classification of Pakistani Face using Deep Learning
Pub Date : 2019-10-01 DOI: 10.1109/CW.2019.00039
S. Jilani, H. Ugail, A. M. Bukar, Andrew Logan
Demographic-based identification plays an active role in the field of face identification. Over the past decade, machine learning algorithms have been used to investigate challenges surrounding ethnic classification for specific populations, such as African, Asian, and Caucasian people. Ethnic classification for individuals of South Asian, Pakistani heritage, however, remains to be addressed. The present paper addresses a two-category (Pakistani vs. non-Pakistani) classification task on a novel, purpose-built dataset. To the best of our knowledge, this work is the first to report a machine learning ethnic classification task with South Asian (Pakistani) faces. We conducted a series of experiments using deep learning models (ResNet-50, ResNet-101, and ResNet-152) for feature extraction and a linear support vector machine (SVM) for classification. The experimental results demonstrate that ResNet-101 achieves the highest accuracy of 99.2% for full-face ethnicity classification, followed by 91.7% and 95.7% for the nose and mouth, respectively.
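As a rough illustration of the pipeline described above (deep features from an inner ResNet layer followed by a linear SVM), the sketch below strips the classification head from a torchvision ResNet-101 and trains a LinearSVC on the resulting 2048-dimensional features. The pretrained-weight choice and the stand-in images and labels are placeholders, not the paper's data or configuration.

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import LinearSVC

# ResNet-101 backbone; the "IMAGENET1K_V1" weights are an assumption for illustration.
backbone = models.resnet101(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()      # drop the classifier, keep 2048-d features
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_feature(img):
    """img: a PIL image of a face (or a face region such as the nose or mouth)."""
    x = preprocess(img.convert("RGB")).unsqueeze(0)
    return backbone(x).squeeze(0).numpy()

# Dummy stand-ins for the face images and labels (1 = Pakistani, 0 = non-Pakistani).
faces = [Image.new("RGB", (256, 256), color=c) for c in ("gray", "white", "black", "blue")]
labels = [1, 0, 1, 0]
X = np.stack([extract_feature(f) for f in faces])
clf = LinearSVC().fit(X, labels)
print(clf.predict(X[:2]))
```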
Citations: 5
La Petite Fee Cosmo: Learning Data Structures Through Game-Based Learning
Pub Date : 2019-10-01 DOI: 10.1109/CW.2019.00041
V. Kannappan, Owen Noel Newton Fernando, A. Chattopadhyay, Xavier Tan, Jeffrey Hong, S. H. Soon, Hui En Lye
This research aims to implement the productive-failure teaching concept through interactive learning games as a method to nurture innovative teaching and learning. It also aims to promote innovative approaches to learning, improve students' learning experience, and strengthen their understanding of the linked list data structure taught in computer science courses, a concept that is not widely understood by students. A 2D bridge-building puzzle game, “La Petite Fee Cosmo”, was developed to help students not only understand the underlying concepts of the linked list but also foster creative use of its various operations in diverse situations.
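The game itself is not reproduced here, but the data structure it teaches is standard; a minimal singly linked list of the kind whose insertion and traversal the bridge-building puzzles illustrate might look like the sketch below (the bridge metaphor in the comments is ours, not the authors').

```python
class Node:
    """One bridge segment: a value plus a reference to the next segment."""
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    def append(self, value):
        """Attach a new node at the end of the list (extend the bridge)."""
        node = Node(value)
        if self.head is None:
            self.head = node
            return
        cur = self.head
        while cur.next is not None:
            cur = cur.next
        cur.next = node

    def insert_after(self, target, value):
        """Insert a new node right after the first node holding `target`."""
        cur = self.head
        while cur is not None and cur.value != target:
            cur = cur.next
        if cur is not None:
            node = Node(value)
            node.next = cur.next
            cur.next = node

    def to_list(self):
        out, cur = [], self.head
        while cur is not None:
            out.append(cur.value)
            cur = cur.next
        return out

bridge = LinkedList()
for plank in ("A", "B", "D"):
    bridge.append(plank)
bridge.insert_after("B", "C")   # repair the gap in the bridge
print(bridge.to_list())         # ['A', 'B', 'C', 'D']
```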
Citations: 9
Fonts Style Transfer using Conditional GAN
Pub Date : 2019-10-01 DOI: 10.1109/CW.2019.00075
Naho Sakao, Y. Dobashi
Fonts are an important element in designing printed materials that include text, such as documents, posters, leaflets, and pamphlets. Many digital fonts in different styles are now available for desktop publishing, but the number of Japanese-language fonts is much smaller than that of European ones. This causes a problem when designing materials that mix Japanese and European letters. Creating a new font is difficult and requires specialized knowledge and experience. Our research goal is to address this problem by transferring the styles of European fonts to Japanese characters using a neural network. In this paper, we report experimental results obtained with the well-known deep learning framework "pix2pix."
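The paper applies the existing pix2pix framework rather than a new architecture, so the sketch below is only a schematic of the conditional-GAN objective pix2pix optimizes (an adversarial term plus an L1 term between the generated and target glyph). Toy convolutional networks stand in for the real U-Net generator and PatchGAN discriminator, and the data tensors are random placeholders.

```python
import torch
import torch.nn as nn

# Toy stand-ins: pix2pix actually uses a U-Net generator and a PatchGAN
# discriminator; these small conv nets only illustrate the training objective.
G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1), nn.Tanh())
D = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(16, 1, 3, padding=1))   # patch-wise real/fake scores

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
lambda_l1 = 100.0                 # L1 weight used in the original pix2pix paper

src = torch.randn(4, 1, 64, 64)   # e.g. a plain glyph image (random toy data here)
tgt = torch.randn(4, 1, 64, 64)   # the same glyph rendered in the target style

# Discriminator step: real (src, tgt) pairs -> 1, generated (src, fake) pairs -> 0.
fake = G(src).detach()
d_loss = bce(D(torch.cat([src, tgt], 1)), torch.ones(4, 1, 64, 64)) + \
         bce(D(torch.cat([src, fake], 1)), torch.zeros(4, 1, 64, 64))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator and stay close to the target in L1.
fake = G(src)
g_loss = bce(D(torch.cat([src, fake], 1)), torch.ones(4, 1, 64, 64)) + \
         lambda_l1 * l1(fake, tgt)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(float(d_loss), float(g_loss))
```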
Citations: 0
Composite Sketch Recognition Using Multi-scale Hog Features and Semantic Attributes
Pub Date : 2019-10-01 DOI: 10.1109/CW.2019.00028
Xinying Xue, Jiayi Xu, Xiaoyang Mao
Composite sketch recognition belongs to heterogeneous face recognition research, which is of great importance in the field of criminal investigation. Because composite face sketches and photos belong to different modalities, robust representation of face features across modalities is the key to recognition. Considering that a composite sketch lacks texture detail in some areas, and that using texture features alone may result in low recognition accuracy, this paper proposes a composite sketch recognition algorithm based on multi-scale Hog features and semantic attributes. First, global Hog features of the face and local Hog features of each facial component are extracted to represent contour and detail. The global and detail features are then fused at score level according to their importance. Finally, semantic attributes are employed to reorder the matching results. The proposed algorithm is validated on the PRIP-VSGC and UoM-SGFS databases, achieving rank-10 identification accuracies of 88.6% and 96.7%, respectively, which demonstrates that it outperforms other state-of-the-art methods.
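The abstract describes HOG features extracted globally and per facial component, then fused at score level. The sketch below illustrates that pattern with scikit-image's `hog` descriptor and a fixed weighted sum of cosine-similarity scores; the crop coordinates, weights, and similarity measure are illustrative assumptions rather than the paper's settings, and the semantic-attribute reordering step is omitted.

```python
import numpy as np
from skimage.feature import hog

def hog_descriptor(patch, cell=8):
    """HOG feature vector for a grayscale patch (values in [0, 1])."""
    return hog(patch, orientations=9, pixels_per_cell=(cell, cell),
               cells_per_block=(2, 2), feature_vector=True)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def match_score(sketch, photo, weights=(0.5, 0.25, 0.25)):
    """Fuse a global face score with eye- and mouth-region scores."""
    # Illustrative crops for a 128x128 face image; real components would come
    # from a landmark detector.
    regions = [(slice(0, 128), slice(0, 128)),     # whole face
               (slice(24, 64), slice(16, 112)),    # eye band
               (slice(80, 120), slice(32, 96))]    # mouth region
    scores = [cosine(hog_descriptor(sketch[r]), hog_descriptor(photo[r]))
              for r in regions]
    return sum(w * s for w, s in zip(weights, scores))

rng = np.random.default_rng(0)
sketch_img = rng.random((128, 128))   # stand-in for a composite sketch
photo_img = rng.random((128, 128))    # stand-in for a mug-shot photo
print(match_score(sketch_img, photo_img))
```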
Citations: 6
Bird Species Classification with Audio-Visual Data using CNN and Multiple Kernel Learning
Pub Date : 2019-10-01 DOI: 10.1109/CW.2019.00022
B. Naranchimeg, Chao Zhang, T. Akashi
Recently, deep convolutional neural networks (CNNs) have become a new standard in many machine learning applications, not only in image but also in audio processing. However, most studies explore only a single type of training data. In this paper, we present a study on classifying bird species by combining deep neural features of both visual and audio data using a kernel-based fusion method. Specifically, we extract deep neural features from the activation values of an inner CNN layer and combine them by multiple kernel learning (MKL) to perform the final classification. In the experiments, we train and evaluate our method on the standard CUB-200-2011 data set combined with an audio data set we collected ourselves, covering the same 200 bird species (classes). The experimental results indicate that our CNN+MKL method, which combines both categories of data, outperforms single-modality methods, some simple kernel combination methods, and the conventional early fusion method.
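Full MKL learns the kernel weights jointly with the classifier; the sketch below shows only a fixed-weight simplification of the fusion idea described above, combining an RBF kernel over visual features with one over audio features and feeding the summed kernel to an SVM with a precomputed kernel. The feature dimensions and the 0.6/0.4 weights are arbitrary placeholders.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_train, n_test = 60, 10
# Stand-ins for CNN features extracted from images and audio spectrograms.
vis_train, vis_test = rng.standard_normal((n_train, 512)), rng.standard_normal((n_test, 512))
aud_train, aud_test = rng.standard_normal((n_train, 128)), rng.standard_normal((n_test, 128))
y_train = rng.integers(0, 5, size=n_train)   # toy labels for 5 bird species

w_vis, w_aud = 0.6, 0.4   # fixed kernel weights; MKL would learn these

# Combined Gram matrices for training and testing.
K_train = w_vis * rbf_kernel(vis_train) + w_aud * rbf_kernel(aud_train)
K_test = w_vis * rbf_kernel(vis_test, vis_train) + w_aud * rbf_kernel(aud_test, aud_train)

clf = SVC(kernel="precomputed").fit(K_train, y_train)
print(clf.predict(K_test))
```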
Citations: 4
CPR Virtual Reality Training Simulator for Schools
Pub Date : 2019-10-01 DOI: 10.1109/CW.2019.00013
N. Vaughan, N. John, N. Rees
This research project developed a Virtual Reality (VR) training simulator for the CPR procedure. It is designed for training school children and can also form part of a larger system for training paramedics with VR. The simulator incorporates a number of advanced VR technologies, including the Oculus Rift and Leap Motion. We have gained input from NHS paramedics and several related organisations to design the system and to provide feedback and evaluation of the preliminary working prototype.
Citations: 10
Development of Easy Attachable Biological Information Measurement Device for Various Head Mounted Displays
Pub Date : 2019-10-01 DOI: 10.1109/CW.2019.00009
Masahiro Inazawa, Yuki Ban
It is important to measure users' biological information while they experience virtual reality (VR) content. By measuring such information during VR stimulation, the body's response to the stimulation can be verified. In addition, the stimulation can be changed interactively by estimating the user's feelings from the measured data. However, with existing VR content, the burden on the user of attaching sensors for biological sensing is significant, and noise due to body movement is also a problem. In this paper, a biometric device that can be mounted on a head-mounted display (HMD) was developed. Because an HMD is attached firmly to the face, it is considered robust to body movement, so the additional load of mounting the sensor can be ignored. The developed device can simply be attached to an HMD. A pulse waveform is acquired from an optical pulse wave sensor arranged on the nose side of the HMD, and a respiration waveform is acquired from a thermopile arranged in the nostril area. We conducted an experiment to verify that the pulse wave and respiration can be measured with sufficient accuracy to calculate the user's tension and excitement. The experiment confirmed that the pulse wave can be measured with an error of less than 1% for nine out of 14 users, and that respiration can be measured with an error of 0.6% if the user does not move. Respiration was measured with high accuracy regardless of the type of HMD used.
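A pulse waveform like the one the device acquires is typically turned into a heart-rate estimate by peak detection on the photoplethysmography signal. The sketch below shows that step on a synthetic signal using `scipy.signal.find_peaks`; the sampling rate, peak-distance threshold, and absence of filtering are illustrative assumptions, not the authors' processing chain.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 100                       # assumed sampling rate of the optical sensor, Hz
t = np.arange(0, 30, 1 / FS)   # 30 s of data

# Synthetic PPG-like signal: ~72 beats per minute plus measurement noise.
signal = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)

# Peaks must be at least 0.4 s apart (i.e. below 150 bpm) and reasonably tall.
peaks, _ = find_peaks(signal, distance=int(0.4 * FS), height=0.5)

intervals = np.diff(peaks) / FS            # inter-beat intervals in seconds
heart_rate_bpm = 60.0 / intervals.mean()
print(f"estimated heart rate: {heart_rate_bpm:.1f} bpm")
```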
Citations: 1
A Shoulder-Surfing Resistant Image-Based Authentication Scheme with a Brain-Computer Interface
Pub Date : 2019-10-01 DOI: 10.1109/CW.2019.00061
Florian Gondesen, Matthias Marx, Ann-Christine Kycler
With the increasing availability of consumer brain-computer interfaces, new methods of authentication can be considered. In this paper, we present a shoulder-surfing-resistant means of entering a graphical password by measuring brain activity. The password is a subset of images displayed repeatedly via rapid serial visual presentation. The occurrence of a password image elicits an event-related potential in the electroencephalogram, the P300 response, which is used to classify whether or not an image belongs to the password subset. We compare individual classifiers, trained on samples from a specific user, with general P300 classifiers trained over all subjects, and we evaluate the permanence of the classification results across three subsequent experimental sessions. The classification score increases significantly from the first to the third session, while comparing natural photos with simple objects as stimuli shows no significant difference. In total, our authentication scheme achieves an equal error rate of about 10%. In the future, with increasing accuracy and proliferation, brain-computer interfaces could find practical application in alternative authentication methods.
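As a rough illustration of the classification step described above (deciding from the EEG epoch following each image whether a P300 was elicited), the sketch below averages a post-stimulus window per channel and trains a linear discriminant classifier on toy data; the sampling rate, epoch length, window, and LDA choice are generic assumptions rather than the authors' pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 256                       # assumed sampling rate, Hz
N_CH, EPOCH_S = 8, 0.8         # channels and epoch length after each image

def epoch_features(epoch):
    """Mean amplitude per channel in the 250-500 ms window, where a P300 typically peaks."""
    lo, hi = int(0.25 * FS), int(0.5 * FS)
    return epoch[:, lo:hi].mean(axis=1)

# Toy training data: epochs following password (target) and non-password images.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((200, N_CH, int(EPOCH_S * FS)))
is_target = rng.integers(0, 2, size=200)
epochs[is_target == 1, :, int(0.3 * FS):int(0.45 * FS)] += 1.0  # simulated P300 bump

X = np.array([epoch_features(e) for e in epochs])
clf = LinearDiscriminantAnalysis().fit(X, is_target)
print(clf.score(X, is_target))   # training accuracy on the toy data
```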
Citations: 4