Pub Date: 2019-08-01  DOI: 10.1109/Ubi-Media.2019.00046
Markerless Indoor/Outdoor Augmented Reality Navigation Device Based on ORB-Visual-Odometry Positioning Estimation and Wall-Floor-Boundary Image Registration
Chian C. Ho, Ming-Che Ho, Chuan-Yu Chang
For markerless indoor/outdoor Augmented Reality Navigation (ARN) technology, the camera pose is the fundamental quantity for positioning and pose estimation, and the floor plane is the indispensable fiducial target for image registration. This paper proposes ORB-visual-odometry positioning estimation and wall-floor-boundary image registration to make ARN more precise, reliable, and responsive. Experimental results show that both ORB-visual-odometry positioning estimation and wall-floor-boundary image registration achieve higher accuracy and lower latency than well-known conventional positioning estimation and image registration methods for ARN. Moreover, the two proposed methods are implemented on a handheld Android embedded platform and verified to work well on a handheld indoor/outdoor augmented reality navigation device.
{"title":"Markerless Indoor/Outdoor Augmented Reality Navigation Device Based on ORB-Visual-Odometry Positioning Estimation and Wall-Floor-Boundary Image Registration","authors":"Chian C. Ho, Ming-Che Ho, Chuan-Yu Chang","doi":"10.1109/Ubi-Media.2019.00046","DOIUrl":"https://doi.org/10.1109/Ubi-Media.2019.00046","url":null,"abstract":"For markerless indoor/outdoor Augmented Reality Navigation (ARN) technology, camera pose is inevitably the fundamental argument of positioning estimation and pose estimation, and floor plane is indispensably the fiducial target of image registration. This paper proposes ORB-visual-odometry positioning estimation and wall-floor-boundary image registration to make ARN more precise, reliable, and instantaneous. Experimental results show both ORB-visual-odometry positioning estimation and wall-floor-boundary image registration have higher accuracy and less latency than conventional well-known positioning estimation and image registration methods for ARN. On the other hand, these proposed two methods are seamlessly implemented on the handheld Android embedded platform and are smoothly verified to work well on the handheld indoor/outdoor augmented reality navigation device.","PeriodicalId":259542,"journal":{"name":"2019 Twelfth International Conference on Ubi-Media Computing (Ubi-Media)","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115246415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-08-01  DOI: 10.1109/Ubi-Media.2019.00069
What Motivates Learners' Intention to Use Blackboard Mobile Learning (BML)?: Evidence from Thailand
Nattaporn Thongsri, Yukun Bao
This research focuses on the factors that motivate learners' intention to use Blackboard mobile learning. The objective is to investigate the main factors, in terms of perceived usefulness and individual characteristics, that affect the acceptance of Blackboard mobile learning among university students in Thailand. Using a quantitative research method, this study collected behavioral intention data from undergraduate students in Thailand. The Partial Least Squares (PLS) method, a statistical analysis technique based on Structural Equation Modeling (SEM), was used to analyse the data. The results from 314 undergraduate students show that mobile self-efficacy had a significant effect on the perceived ease of use of Blackboard mobile learning, while convenience had a significant effect on its perceived usefulness. In addition, the main variables from the Technology Acceptance Model, namely perceived ease of use and perceived usefulness, were positively related to users' intention to use Blackboard mobile learning.
{"title":"What Motivates Learners' Intention to Use Blackboard Mobile Learning (BML)?: Evidence from Thailand","authors":"Nattaporn Thongsri, Yukun Bao","doi":"10.1109/Ubi-Media.2019.00069","DOIUrl":"https://doi.org/10.1109/Ubi-Media.2019.00069","url":null,"abstract":"this research focuses on the factors motivate learners' intention to use blackboard mobile learning. The research objective is to investigate the main factors in terms of usefulness learning and individual characteristics in order to investigate the factors that affect blackboard mobile learning acceptance among university students in Thailand. Based on the quantitative research method this study obtained behavior intention score from undergraduate students in Thailand. The Partial Least Squares method, a statistical analysis technique based on the Structural Equation Model (SEM), was used to analyse the data. The results of 314 undergraduate students were found that mobile self-efficacy had the significant effect on perceived ease of use of blackboard mobile learning while convenience had the significant effect on perceived usefulness of blackboard mobile learning. In addition, the main variable from the technology acceptance model namely perceived ease of use and perceived usefulness had positively related to user intention to blackboard mobile learning.","PeriodicalId":259542,"journal":{"name":"2019 Twelfth International Conference on Ubi-Media Computing (Ubi-Media)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121901211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-08-01  DOI: 10.1109/Ubi-Media.2019.00011
Security Storage Based on Fountain Code and XOR Encryption in Edge Computing
Rui Xu, Zhenjiang Zhang, Bo Shen, Yanran Zeng, Chenyang Dai
Edge computing has clear advantages in heterogeneity, low latency, and dense network access. At the same time, edge devices have complex structures and weak storage capacity, and many security threats remain in the process of storing sensitive data. This paper proposes a method that combines fountain codes with XOR encryption for data storage on edge devices. First, the source file is encrypted with XOR encryption. The ciphertext is then divided into multiple ciphertext data blocks. After encoding the ciphertext data blocks, we mix them with the coded blocks and distribute them across multiple edge devices. Once enough data blocks are received, the source file can be recovered. The combination of XOR encryption and fountain coding improves the reliability and security of storage.
{"title":"Security Storage Based on Fountain Code and XOR Encryption in Edge Computing","authors":"Rui Xu, Zhenjiang Zhang, Bo Shen, Yanran Zeng, Chenyang Dai","doi":"10.1109/Ubi-Media.2019.00011","DOIUrl":"https://doi.org/10.1109/Ubi-Media.2019.00011","url":null,"abstract":"Edge computing has obvious advantages in heterogeneity, low latency and dense network access. Meanwhile, edge device has complex structure and weak storage capacity. A lot of security threats still exist in the process of storing sensitive data. This paper proposes a method of combining fountain code and XOR encryption to the data storage in edge devices. First, the source file is encrypted by XOR encryption. Then we divide the ciphertext into multiple ciphertext data blocks. After encoding ciphertext data blocks, we mix them with coding blocks and distribute them on multiple edge devices. When we receive enough data blocks, we can recover the source file. The combination of XOR encryption and fountain code improves reliability and security of storage.","PeriodicalId":259542,"journal":{"name":"2019 Twelfth International Conference on Ubi-Media Computing (Ubi-Media)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129770579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-08-01  DOI: 10.1109/Ubi-Media.2019.00026
Critical Success Factors of Analysing User Emotions to Improve the Usability of Systems
Wasura D. Wattearachchi, E. Hettiarachchi, K. Hewagamage
A person's emotions depend on several factors, such as personality, relationships, and mental health, and context also plays a major role. The acceptability and usability of a system can be enhanced when user emotions are considered in context. Most past research has addressed the detection of emotions using facial expressions, gaze direction, heart rate analysis, Electroencephalogram (EEG) signals, text analytics, and keystroke dynamics combined with mouse movements. This paper is a literature review that summarizes previous research in order to identify the critical success factors of existing approaches for detecting user emotions through facial expressions and text data together with context. Finally, considering the advantages and drawbacks of existing applications, it proposes novel approaches for analysing user emotions to improve the usability of systems.
{"title":"Critical Success Factors of Analysing User Emotions to Improve the Usability of Systems","authors":"Wasura D. Wattearachchi, E. Hettiarachchi, K. Hewagamage","doi":"10.1109/Ubi-Media.2019.00026","DOIUrl":"https://doi.org/10.1109/Ubi-Media.2019.00026","url":null,"abstract":"Emotions of a person depend on several factors like personality, relationships, mental health, and among them, context also plays a major role. The acceptability and usability of a system can be enhanced when the user emotions are considered based on the context. Most of the past research has been carried out related to the detection of emotions using facial expressions, gaze direction, heart rate analysis, Electroencephalogram (EEG) signals, text analytics, and keystroke dynamics along with mouse movements. This paper is a literature review which summarizes the previous research work in order to identify the critical success factors of the existing approaches when detecting user emotions through facial expressions and text data along with the context. Finally, as a conclusion, considering the advantages and drawbacks of the existing applications, the novel approaches that need to be followed to analyse user emotions to improve the usability of systems are proposed.","PeriodicalId":259542,"journal":{"name":"2019 Twelfth International Conference on Ubi-Media Computing (Ubi-Media)","volume":"380 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123506017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-08-01  DOI: 10.1109/Ubi-Media.2019.00062
Comparing Effects of STR Versus SELT on Cognitive Load
R. Shadiev, Narzikul Shadiev, Bakhora Shadieva, M. Fayziev, B. Reynolds
We applied speech-to-text recognition (STR) and speech-enabled language translation (SELT) technologies to lectures delivered in English as the Medium of Instruction (EMI) to help students manage their cognitive load effectively. The goal was to compare how these two technologies help achieve this task. To this end, two groups of students were recruited to attend two lectures (one at the intermediate level and the other at the advanced level). STR texts were shown to a control group, while SELT texts were shown to an experimental group. We compared the cognitive load of the students in both groups after each lecture. Our findings revealed that the between-group difference in cognitive load was not significant; however, when student language ability was considered, a significant between-group difference existed during the advanced-level lecture. We draw implications for researchers and teachers from these results.
{"title":"Comparing Effects of STR Versus SELT on Cognitive Load","authors":"R. Shadiev, Narzikul Shadiev, Bakhora Shadieva, M. Fayziev, B. Reynolds","doi":"10.1109/Ubi-Media.2019.00062","DOIUrl":"https://doi.org/10.1109/Ubi-Media.2019.00062","url":null,"abstract":"We applied STR, i.e. speech-to-text recognition, and SELT, i.e. speech-enabled language translation, technologies to lectures which were in English as Media of Instruction (EMI) to help students manage their cognitive load effectively. The goal was to compare how these two technologies help achieve this important task. To this end, two groups of students were recruited to attend two lectures (one at the intermediate and the other at the advanced level). STR-texts were shown to a control group while SELT-texts were shown to an experimental group. We compared the cognitive load of the students in both groups after each lecture. Our findings revealed that between-group difference in cognitive load was not significant; however, if student language ability was considered, a significant between-group difference existed during the advanced level lecture. We draw implications for researchers and teachers following these results.","PeriodicalId":259542,"journal":{"name":"2019 Twelfth International Conference on Ubi-Media Computing (Ubi-Media)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121889056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-08-01  DOI: 10.1109/Ubi-Media.2019.00028
The Design of Real-Time Digital Clothing Projection System
Hong-Nien Chen, Chuan-Feng Chiu, T. Shih, Chi-Yen Lin, Fitri Utaminingrum, Lin Hui
The variety and richness of costumes are important to the brilliance and fluency of a performance. However, changing costumes for every show, changing them during a performance, and even carrying them are all problems. To address these problems and increase performance variability, this paper proposes a method that lets users project virtual clothing onto themselves using a computer and a projector. A Kinect captures the user's body and skeleton, including the location and direction of each joint. We implement a fast and efficient three-step coordinate transformation from camera coordinates to real-world three-dimensional coordinates to project and control virtual clothing in real time. Users can choose different costumes in our system, and anyone can easily wear virtual costumes during the show.
{"title":"The Design of Real-Time Digital Clothing Projection System","authors":"Hong-Nien Chen, Chuan-Feng Chiu, T. Shih, Chi-Yen Lin, Fitri Utaminingrum, Lin Hui","doi":"10.1109/Ubi-Media.2019.00028","DOIUrl":"https://doi.org/10.1109/Ubi-Media.2019.00028","url":null,"abstract":"The variety and richness of clothing is an important thing for the brilliance and fluency of the performance. However, if we need to change our clothes for every show, changing clothes during the performance and even carrying clothes are both problems. To improve these problems and increase performance variability. In this paper, we propose a method for users to project virtual clothing on themselves using a computer with a projector. Kinect will capture the user's body and bones as well as each location and direction. The implementation the three steps method of a fast and efficient space coordinate transforming from camera coordinates to a real-world three-dimensional space coordinate to project and control virtual clothing in real time. Users can choose different costumes in our system. Anyone can easily wear virtual costumes during the show.","PeriodicalId":259542,"journal":{"name":"2019 Twelfth International Conference on Ubi-Media Computing (Ubi-Media)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124633110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-08-01  DOI: 10.1109/Ubi-Media.2019.00022
An Intelligent Interactive Visualizer to Improve Blended Learning in Higher Education
Rangana Jayashanka, K. Hewagamage, E. Hettiarachchi
Universities design, create, and evaluate new learning activities to enhance the learning environment for both students and teachers. Furthermore, higher education institutes are seeking ways to create more personalized learning environments. There is an opportunity to improve the learning environment by applying Learning Analytics to educational data generated by learners interacting with Virtual Learning Environments (VLEs) and other digital learning tools. More personalized learning environments can be implemented by applying Learning Analytics to Learning Designs, and students' learning progress can be captured in real time through Learning Dashboards during the course run-time. Blended Learning can thus be improved through the synergy between Learning Analytics and Learning Design. This paper outlines a research project that aims to link Learning Design with Learning Analytics in order to enhance learning environments and improve both teacher and student satisfaction. It presents the proposed framework of a Digital Learning Tool (Intelligent Interactive Visualizer) and sets out the program of research and development. The expected outcomes of the research project are also discussed.
{"title":"An Intelligent Interactive Visualizer to Improve Blended Learning in Higher Education","authors":"Rangana Jayashanka, K. Hewagamage, E. Hettiarachchi","doi":"10.1109/Ubi-Media.2019.00022","DOIUrl":"https://doi.org/10.1109/Ubi-Media.2019.00022","url":null,"abstract":"Universities tend to design, create and evaluate new learning activities to enhance the learning environment for both students and teachers. Furthermore, higher educational institutes are finding solutions to create more personalized learning environments. There is an opportunity to improve the learning environment by applying Learning Analytics to educational data generated by learners interacting with Virtual Learning Environments (VLEs) and other digital learning tools. More personalized learning environments can be implemented by applying Learning Analytics on Learning Designs. Students' learning progress can be captured through Learning Dashboards in real-time by applying Learning Analytics during the course run-time. We can improve Blended Learning through the synergy between Learning Analytics and Learning Design. This paper provides an outline of a research project which aims to link Learning Design with Learning Analytics in order to enhance the learning environments and improve both teachers and student satisfaction. The paper provides the proposed framework of a Digital Learning Tool (Intelligent Interactive Visualizer) and sets out the program of research and development. The expected outcomes of the research project also discussed.","PeriodicalId":259542,"journal":{"name":"2019 Twelfth International Conference on Ubi-Media Computing (Ubi-Media)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129547611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-08-01  DOI: 10.1109/Ubi-Media.2019.00021
Stairs Descent Identification for Smart Wheelchair by Using GLCM and Learning Vector Quantization
Ahmad Wali Satria Bahari Johan Satria, Fitri Fitri, Timothy Timothy
A smart wheelchair assists the daily activities of people with physical disabilities. One of its capabilities is detecting obstacles in the form of descending stairs: if users are not aware of a stairs descent, they can fall and be injured. This study therefore aims to create a system that can detect a stairs descent from digital images and provide notifications. The system was built using the Gray Level Co-occurrence Matrix (GLCM) method for feature extraction and Learning Vector Quantization (LVQ) to classify the stairs descent from the digital image. Tests carried out with 200 training images and 40 test images achieved an accuracy of 92.5%, with an average computation time of 0.02779 s for detecting the stairs descent.
{"title":"Stairs Descent Identification for Smart Wheelchair by Using GLCM and Learning Vector Quantization","authors":"Ahmad Wali Satria Bahari Johan Satria, Fitri Fitri, Timothy Timothy","doi":"10.1109/Ubi-Media.2019.00021","DOIUrl":"https://doi.org/10.1109/Ubi-Media.2019.00021","url":null,"abstract":"The smart wheelchair helps the activities of someone who has a physical disability. The smart wheelchair has several capabilities, one of these capabilities is detecting obstacles in the form of stairs descent. Where if they are not aware of the stairs descent, they can fall, it will be an effect injuring. Therefore this study aims to create a system that is able to detect stairs descent based on digital image and provide notifications. The system was built using the Gray Level Co-occurrence Matrix method as feature extraction and Learning Vector Quantization to classify the stairs descent based on the digital image. From the results of the tests that have been carried out using 200 training data and 40 test data obtained an accuracy rate of 92.5 The faster average computation time is 0.02779 (s) for detecting the stairs descent.","PeriodicalId":259542,"journal":{"name":"2019 Twelfth International Conference on Ubi-Media Computing (Ubi-Media)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125469875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-08-01  DOI: 10.1109/Ubi-Media.2019.00045
A Kinect-Based System for Stroke Rehabilitation
S. Yeh, Si-Huei Lee, R. Chan, Shuya Chen
Virtual reality (VR)-based stroke rehabilitation has been shown to be effective in increasing motivation and functional performance in stroke patients. The new motion-sensing technology, Kinect, is cost-effective and does not require the patient to wear sensors on the body, which increases freedom of movement. The objective of this study was to use Kinect technology to develop a VR stroke rehabilitation system with unilateral and bilateral tasks for recovering the function of the upper extremity. This study tested the feasibility, therapeutic effectiveness, and user acceptance of this technology. Two participants with different levels of motor severity received 30-minute rehabilitation sessions 3 times per week over 8 weeks (a total of 24 training sessions). The Wolf Motor Function Test (WMFT), the Test Évaluant la performance des Membres supérieurs des Personnes Âgées (TEMPA), and the Fugl-Meyer Assessment of Physical Performance (FMA) were used to collect data before and after rehabilitation, and during a follow-up, to detect changes in functional performance. Questionnaires on user acceptance of the technology were also administered. On completion of the rehabilitation program using the proposed Kinect-based VR training system, WMFT, TEMPA, and FMA results increased for both participants. The technology acceptance questionnaires indicated that participants had strong intentions to continue using the proposed system for rehabilitation. We developed the first Kinect-based stroke rehabilitation system for the upper extremity and demonstrated its feasibility and effectiveness in improving upper extremity function after a stroke. A large-scale study should be conducted to test the effectiveness of the proposed system for stroke rehabilitation.
{"title":"A Kinect-Based System for Stroke Rehabilitation","authors":"S. Yeh, Si-Huei Lee, R. Chan, Shuya Chen","doi":"10.1109/Ubi-Media.2019.00045","DOIUrl":"https://doi.org/10.1109/Ubi-Media.2019.00045","url":null,"abstract":"Virtual reality (VR)-based stroke rehabilitation has been shown to be effective in increasing motivation and functional performance in stroke patients. The new motion-sensing technology, Kinect, is cost effective and does not require the patient to wear sensors on the body, which increases freedom of movement. The objective of this study was to use Kinect technology to develop a VR stroke rehabilitation system with unilateral and bilateral tasks for recovering the function of the upper extremity. This study tested the feasibility, therapeutic effectiveness, and user acceptance of this technology. Two participants with various levels of motor severity received 30-minute stroke rehabilitation 3 times per week over 8 weeks (a total 24 training sessions). The Wolf Motor Function Test (WMFT), Test Évaluant la performance des Membres supérieurs des Personnes Âgées (TEMPA), and Fugl-Meyer Assessment of Physical Performance (FMA) were used to collect data before and after rehabilitation, and during a follow-up to detect the changes of functional performance. Questionnaires of user acceptance of the technology were administered. On completion of the rehabilitation program, using the proposed Kinect-based VR training system, WMFT, TEMPA, and FMA results increased for both participants. The technology acceptance questionnaires indicated that participants had strong intentions to continue using the proposed system for rehabilitation. We developed the first Kinect-based stroke rehabilitation for the upper extremity, and demonstrated its feasibility and effectiveness in improving upper extremity function after a stroke. A large-scale study should be conducted to test the effectiveness of the proposed system for stroke rehabilitation.","PeriodicalId":259542,"journal":{"name":"2019 Twelfth International Conference on Ubi-Media Computing (Ubi-Media)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126528862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}