Pub Date: 2017-12-01 | DOI: 10.1109/ICOT.2017.8336084
Pasupathy Vimalachandran, Hua Wang, Yanchun Zhang, Ben Heyward, Yueai Zhao
Following the rapid advancement of the Electronic Health Record (EHR) in healthcare settings, the My Health Record (MHR) has been introduced in Australia. However, security and privacy concerns have been hindering the system's development. Even though the MHR system is claimed to be patient-centred and patient-controlled, there are several instances where healthcare providers (other than the usual provider) and the system operators who maintain the system can easily access it, and such unauthorised access can breach patients' privacy. This is one of the main consumer concerns affecting uptake of the system. In this paper, we propose a patient-centred MHR framework that requests authorisation from the patient before their sensitive health information is accessed. The proposed model increases patients' involvement in, and satisfaction with, their healthcare, and also suggests a mobile security system for granting online permission to access the MHR system.
Title: Preserving patient-centred controls in electronic health record systems: A reliance-based model implication
Published in: 2017 International Conference on Orange Technologies (ICOT)
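The access rule the abstract describes (standing consent for the usual provider, explicit per-request patient approval for anyone else) can be sketched as a small permission check. This is a minimal illustrative model, not the paper's implementation; all names (`PatientRecord`, `request_access`, `patient_approve`) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    patient_id: str
    usual_provider: str
    sensitive: bool = True
    approved_requests: set = field(default_factory=set)

def request_access(record: PatientRecord, provider_id: str) -> bool:
    """Return True if the provider may read the record."""
    if provider_id == record.usual_provider:
        return True          # usual provider: standing consent
    if not record.sensitive:
        return True          # non-sensitive data: open to providers
    # Any other access needs explicit, per-request patient approval,
    # e.g. granted through the proposed mobile security system.
    return (provider_id, record.patient_id) in record.approved_requests

def patient_approve(record: PatientRecord, provider_id: str) -> None:
    """Patient grants one provider access (the online-permission step)."""
    record.approved_requests.add((provider_id, record.patient_id))
```

Under this sketch, a specialist's read on a sensitive record fails until the patient approves it from their device.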
Pub Date: 2017-12-01 | DOI: 10.1109/ICOT.2017.8336125
D. Turnbull, Chitralekha Gupta, Dania Murad, Michael D. Barone, Ye Wang
Music is a fun and engaging form of entertainment and is often used by teachers to help students learn languages. In this paper, we describe how recent advances in music technology can be used to develop language-learning applications that might help children, young adults, and adult learners grow their vocabularies, improve their pronunciation, and increase their cultural appreciation. We describe two apps that are under development: a karaoke app and a personalized radio app. Our goal is to provide teachers and students with new tools that are engaging, promote joyful learning, and improve both foreign-language learning and mother-tongue retention.
Title: Using music technology to motivate foreign language learning
Pub Date: 2017-12-01 | DOI: 10.1109/ICOT.2017.8336121
Wei Rao, Zhi Hao Lim, Qing Wang, Chenglin Xu, Xiaohai Tian, E. Chng, Haizhou Li
A real-time speech emotion recognition system must not only achieve high accuracy but also keep memory requirements and running time low in practical applications. This paper focuses on finding effective features with lower memory requirements and running time for such a system. To this end, fixed-dimensional speech representations are considered because of their lower memory requirement and computation cost. This paper investigates two types of fixed-dimensional speech representations, high-level descriptors and i-vectors, and compares them with conventional frame-based low-level descriptors in terms of accuracy and computation cost. Experimental results on the IEMOCAP database show that although high-level descriptors and i-vectors contain only compact information compared with low-level descriptors, they achieve slightly better performance. Experiments also demonstrate that the computation cost of i-vectors is much less than that of both low-level and high-level descriptors.
Title: Investigation of fixed-dimensional speech representations for real-time speech emotion recognition system
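The appeal of a fixed-dimensional representation is that utterances of any length map to vectors of one size, so a downstream classifier sees a constant input dimension. A minimal sketch using mean/std statistics pooling as an illustrative stand-in for the functionals behind high-level descriptors (this is not the paper's i-vector extractor):

```python
import numpy as np

def utterance_embedding(frames: np.ndarray) -> np.ndarray:
    """Collapse a variable-length (n_frames, n_dims) matrix of
    frame-based low-level descriptors into one fixed-dimensional
    vector via mean/std pooling."""
    mean = frames.mean(axis=0)
    std = frames.std(axis=0)
    return np.concatenate([mean, std])   # shape: (2 * n_dims,)

# Two utterances of different lengths yield vectors of the same size.
short = utterance_embedding(np.random.randn(50, 13))
long_ = utterance_embedding(np.random.randn(400, 13))
assert short.shape == long_.shape == (26,)
```

The pooled vector is also cheap to store and compare, which is the memory/runtime advantage the abstract highlights.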
Pub Date: 2017-12-01 | DOI: 10.1109/ICOT.2017.8336080
B. Celler, A. Argha, M. Varnfield, R. Jayasena
In this paper we summarize the results of the CSIRO National Telehealth Project, review the level of user acceptance and patient compliance with the telemonitoring regime, and discuss the unique characteristics of the selected telemonitoring system with respect to its patient-centric user interfaces, novel data-acquisition methods, and ability to record, store, and review all graphical traces remotely. We hypothesize that these features are essential to enhance the diagnostic capabilities of care coordinators, ensure a high level of patient compliance, improve the quality of the measurements, and improve patient self-management.
Title: The importance of at-home telemonitoring of vital signs for patients with chronic conditions
Pub Date: 2017-12-01 | DOI: 10.1109/ICOT.2017.8336076
H. Sharifzadeh, H. Mehdinezhad, Jacqueline Alleni, I. Mcloughlin, I. Ardekani
In this paper, we use voice samples recorded from laryngectomised patients to develop a novel method for speech enhancement and the regeneration of natural-sounding speech for laryngectomees. By leveraging recent advances in computational methods for speech reconstruction, our proposed method takes advantage of both non-training and training-based approaches to improve the quality of reconstructed speech for voice-impaired individuals. Since the proposed method has been developed from samples obtained from post-laryngectomised patients (and not from the characteristics of other alternative modes of speech such as whispers and pseudo-whispers), it can address the limitations of current computational methods to some extent. Focusing on English vowels, objective evaluations are carried out to show the efficiency of the proposed method.
Title: Formant smoothing for quality improvement of post-laryngectomised speech reconstruction
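To illustrate the general idea of formant smoothing, the sketch below median-filters a per-frame formant-frequency track so that single-frame estimation outliers are suppressed. This is a generic smoothing pass for illustration only; the paper's actual smoothing algorithm is not reproduced here.

```python
import numpy as np

def smooth_track(track, win=5):
    """Median-smooth a per-frame formant-frequency track (Hz).
    Edge frames are handled by replicating the boundary values."""
    track = np.asarray(track, dtype=float)
    half = win // 2
    padded = np.pad(track, half, mode="edge")
    return np.array([np.median(padded[i:i + win])
                     for i in range(len(track))])

# A spurious single-frame jump in an F1 track is removed:
f1 = [500, 505, 498, 900, 502, 507, 503]   # 900 Hz outlier
print(smooth_track(f1, win=5))
```

A median filter is a common choice for formant tracks because, unlike a moving average, it removes isolated spikes without dragging neighbouring frames toward the outlier.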
Pub Date: 2017-12-01 | DOI: 10.1109/ICOT.2017.8336088
Bingbing Kang, Xin Dang, Ran Wei
Snoring sound is an essential signal of obstructive sleep apnea (OSA). To detect snoring and apnea events in sleep audio recordings, a novel hybrid neural-network-based snoring detection method is evaluated in this study. The proposed method uses linear predictive coding (LPC) and Mel-frequency cepstral coefficient (MFCC) features. The dataset comprised full-night audio recordings from 24 individuals who acknowledged having snoring habits, labelled with polysomnography results. The method was demonstrated experimentally to be effective for snoring and apnea event detection. Its performance was evaluated by classifying different events (snoring, apnea, and silence) in the sleep sound recordings and comparing the classification against ground truth. The proposed algorithm achieved an accuracy of 90.65% for detecting snoring events, 90.99% for apnea, and 90.30% for silence.
Title: Snoring and apnea detection based on hybrid neural networks
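One of the two feature types named in the abstract, LPC, can be computed per frame with the Levinson-Durbin recursion on the signal's autocorrelation. A minimal sketch of that step (framing, windowing, and the MFCC branch are omitted, and this is not claimed to match the paper's exact feature pipeline):

```python
import numpy as np

def lpc(signal: np.ndarray, order: int) -> np.ndarray:
    """Estimate LPC coefficients [1, a1, ..., a_order] via the
    Levinson-Durbin recursion on biased autocorrelation estimates."""
    n = len(signal)
    # Autocorrelation at lags 0..order
    r = np.array([np.dot(signal[:n - k], signal[k:])
                  for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[1:i][::-1])
        k = -acc / err                 # reflection coefficient
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= (1.0 - k * k)           # residual prediction error
    return a
```

For a signal generated by x[t] = 0.5·x[t-1] - 0.3·x[t-2] + noise, `lpc(x, 2)` should return approximately `[1, -0.5, 0.3]`, i.e. the negated autoregressive coefficients.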
Pub Date: 2017-12-01 | DOI: 10.1109/ICOT.2017.8336123
Yang-Yen Ou, A. Tsai, Jhing-Fa Wang, Po-Chien Lin
Computer-vision technologies give a home robot visual ability and can improve the user experience in home-robot applications. Vision research tends to solve a single problem in a particular area, such as emotion understanding, identity identification, or object detection. This paper proposes an integrated vision system for emotion understanding and identity confirmation. The facial landmarks, RGB image, and skeleton information captured by Kinect are the inputs of the integrated vision system. The facial landmarks are used for Action Unit (AU) detection, since emotion recognition is treated as a combination of action units; the RGB image and skeleton information are used for identity confirmation. The main contributions are summarized as follows. 1) An integrated vision system is proposed for user description. 2) A hierarchical-architecture SVM is presented for the analysis of facial action units. 3) The system uses facial image and skeleton information to enhance identity confirmation. Online experiments achieved average accuracies of 86.33% and 86.26%, respectively. The experimental results demonstrate the effectiveness and efficiency of the proposed system in real-time applications.
Title: An integrated vision system on emotion understanding and identity confirmation
Pub Date: 2017-12-01 | DOI: 10.1109/ICOT.2017.8336091
Ming-Hsiang Su, Chung-Hsien Wu, Kun-Yi Huang, Qian-Bei Hong, H. Wang
Owing to demographic changes, services designed for the elderly are more needed than before and increasingly important. In previous work, social media or community-based question-answer data were generally used to build chatbots. In this study, we collected the MHMC chit-chat dataset from daily conversations with the elderly. Since people are free to say anything to the system, the collected sentences are converted into patterns during preprocessing to cover the variability of conversational sentences. Then, an LSTM-based multi-layer embedding model is used to extract the semantic information between words and sentences in a single turn with multiple sentences when chatting with the elderly. Finally, the Euclidean distance is employed to select a proper question pattern, which is then used to select the corresponding answer given to the elderly user. For performance evaluation, a five-fold cross-validation scheme was employed for training and evaluation. Experimental results show that the proposed method achieved an accuracy of 79.96% for top-1 response selection, outperforming the traditional Okapi model.
Title: A chatbot using LSTM-based multi-layer embedding for elderly care
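The final retrieval step the abstract describes, nearest question pattern by Euclidean distance, then return its paired answer, can be sketched in a few lines. The embeddings below are stand-in vectors; in the paper they come from the LSTM-based multi-layer embedding model.

```python
import numpy as np

def select_response(query_vec, pattern_vecs, answers):
    """Return the answer paired with the question pattern whose
    embedding is nearest (Euclidean distance) to the query embedding."""
    dists = np.linalg.norm(pattern_vecs - query_vec, axis=1)
    return answers[int(np.argmin(dists))]

# Toy 2-D embeddings for three stored question patterns:
patterns = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
answers = ["Good morning!", "I took my pills.", "Let's chat."]
print(select_response(np.array([0.9, 0.1]), patterns, answers))
# → Good morning!
```

With real sentence embeddings, the same argmin over distances picks the stored pattern semantically closest to the user's utterance.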
Pub Date: 2017-12-01 | DOI: 10.1109/ICOT.2017.8336077
Zahra Hosseindokht, Mohsen Paryavi, E. Asadian, R. Mohammadpour, H. Rafii-Tabar, P. Sasanpour
A low-cost nanostructured pressure sensor based on graphene has been designed, fabricated, and tested. The sensor consists of a periodic structure of graphene oxide/reduced graphene oxide, fabricated with a laser-scanning system in which a laser beam converts the graphene oxide substrate to reduced graphene oxide. The geometry of the sensor has been designed and optimized through computational analysis with the finite element method. Measurement results show that the sensor is capable of measuring pressures of 1.45 kPa. The proposed sensor structure can accordingly be exploited in electronic-skin, synthetic-tissue, and robotic applications.
Title: Pressure sensor based on patterned laser scribed reduced graphene oxide; experiment & modeling
Pub Date: 2017-12-01 | DOI: 10.1109/ICOT.2017.8336117
Vedant Sandhu, Aung Aung Phyo Wai, C. Y. Ho
Although new forms of learning methods have emerged embracing digital technologies, there is still no solution for objectively assessing students' engagement, which is pertinent to learning performance. Beyond traditional class questionnaires and exams, measuring attention or engagement with sensors in real time is attracting growing interest. This paper investigates how multimodal sensor attributes quantify engagement levels through a set of learning experiments. We conducted a two-phase experiment with 10 high-school students who participated in different activities lasting about one hour. Phase 1 involved collecting training data for the classifier, while phase 2 required participants to complete two reading-comprehension tests with passages they liked and disliked, simulating an e-learning experience. We used commercial low-cost sensors, including an EEG headband, a desktop eye tracker, and PPG and GSR sensors, to collect multimodal data. Features from the different sensors were extracted and labelled using our experiment design and reaction-time tasks. Accuracies upwards of 90% were achieved when classifying the EEG data into three engagement levels. We therefore suggest leveraging multimodal sensors to quantify multi-dimensional indexes such as engagement and emotion for real-time assessment of learning performance. We hope that our work paves the way for assessing learning performance with multi-faceted criteria encompassing neural, physiological, and psychological states.
Title: Evaluation of learning performance by quantifying user's engagement investigation through low-cost multi-modal sensors
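A common way the literature scores EEG-derived engagement is the beta / (alpha + theta) band-power index, shown below with illustrative thresholds for a three-level mapping. This is a generic index from prior work, offered only as an example of sensor-derived engagement scoring; the paper's own classifier and thresholds are not reproduced.

```python
def engagement_index(theta: float, alpha: float, beta: float) -> float:
    """Classic beta / (alpha + theta) engagement index computed from
    EEG band powers (higher beta relative to slow bands => more engaged)."""
    return beta / (alpha + theta)

def to_level(index: float, low: float = 0.4, high: float = 0.8) -> str:
    """Map the index to 3 engagement levels; thresholds are illustrative."""
    if index < low:
        return "low"
    return "medium" if index < high else "high"

idx = engagement_index(theta=4.0, alpha=6.0, beta=9.0)
print(idx, to_level(idx))   # 9 / (6 + 4) = 0.9 → high
```

A deployed system would compute the band powers per epoch from the EEG headband's spectrum and calibrate the thresholds per user, e.g. with the phase-1 training data described above.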