
2017 International Conference on Orange Technologies (ICOT): Latest Publications

Preserving patient-centred controls in electronic health record systems: A reliance-based model implication
Pub Date : 2017-12-01 DOI: 10.1109/ICOT.2017.8336084
Pasupathy Vimalachandran, Hua Wang, Yanchun Zhang, Ben Heyward, Yueai Zhao
As a consequence of the rapid advancement of Electronic Health Records (EHRs) in healthcare settings, the My Health Record (MHR) system has been introduced in Australia. However, security and privacy concerns have hampered the system's development. Although the MHR system is described as patient-centred and patient-controlled, there are several instances in which healthcare providers other than the usual provider, as well as the system operators who maintain the system, can easily access records, and such unauthorised access can breach patient privacy. This is one of the main consumer concerns affecting uptake of the system. In this paper, we propose a patient-centred MHR framework that requests authorisation from the patient before their sensitive health information is accessed. The proposed model increases patients' involvement in and satisfaction with their healthcare, and it also suggests a mobile security system through which patients grant online permission to access the MHR system.
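The abstract describes the access-control idea only at the architecture level. The minimal Python sketch below illustrates the general patient-authorisation flow it implies; the `PatientControlledRecord` class, the `request_patient_authorisation` callback standing in for the proposed mobile security system, and all identifiers are hypothetical, not the authors' implementation.

```python
from typing import Callable, Set

class PatientControlledRecord:
    """Minimal sketch of a patient-centred access check for an MHR-style record.

    Hypothetical illustration only: the paper describes its framework at the
    architecture level, not as code.
    """

    def __init__(self, patient_id: str, usual_providers: Set[str],
                 request_patient_authorisation: Callable[[str, str], bool]):
        self.patient_id = patient_id
        self.usual_providers = usual_providers   # providers pre-approved by the patient
        # Callback standing in for the proposed mobile security system: it should
        # return True only if the patient grants one-off online permission.
        self.request_patient_authorisation = request_patient_authorisation

    def can_access(self, provider_id: str, reason: str) -> bool:
        # Usual providers keep routine access; anyone else needs explicit,
        # per-request permission from the patient.
        if provider_id in self.usual_providers:
            return True
        return self.request_patient_authorisation(provider_id, reason)

# Usage: an unfamiliar provider triggers the (simulated) mobile authorisation prompt.
record = PatientControlledRecord(
    patient_id="patient-001",
    usual_providers={"gp-clinic-42"},
    request_patient_authorisation=lambda provider, reason: reason == "emergency",
)
print(record.can_access("gp-clinic-42", "routine review"))    # True: usual provider
print(record.can_access("hospital-er-7", "emergency"))        # True: patient grants permission
print(record.can_access("system-operator-9", "maintenance"))  # False: no patient consent
```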
Citations: 17
Using music technology to motivate foreign language learning
Pub Date : 2017-12-01 DOI: 10.1109/ICOT.2017.8336125
D. Turnbull, Chitralekha Gupta, Dania Murad, Michael D. Barone, Ye Wang
Music is a fun and engaging form of entertainment and is often used by teachers to help students learn languages. In this paper, we describe how recent advances in music technology can be used to develop language learning applications that might help children, young adults, and adult learners grow their vocabularies, improve their pronunciation, and increase their cultural appreciation. We describe two apps that are under development: a karaoke app and a personalized radio app. Our goal is to provide teachers and students with new tools that are engaging, promote joyful learning, and improve both foreign language learning and mother-tongue retention.
Citations: 3
Investigation of fixed-dimensional speech representations for real-time speech emotion recognition system
Pub Date : 2017-12-01 DOI: 10.1109/ICOT.2017.8336121
Wei Rao, Zhi Hao Lim, Qing Wang, Chenglin Xu, Xiaohai Tian, E. Chng, Haizhou Li
A real-time speech emotion recognition system must not only achieve high accuracy but also keep memory usage and running time practical. This paper explores effective features with lower memory requirements and running time for real-time speech emotion recognition. To this end, fixed-dimensional speech representations are considered because of their lower memory requirements and computation cost. Two types of fixed-dimensional representations, high-level descriptors (HLDs) and i-vectors, are investigated and compared with conventional frame-based low-level descriptors (LLDs) in terms of accuracy and computation cost. Experimental results on the IEMOCAP database show that although HLDs and i-vectors carry only compact information compared with LLDs, they achieve slightly better performance. Experiments also demonstrate that the computation cost of i-vectors is much lower than that of both LLDs and HLDs.
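For context, a high-level descriptor is typically obtained by summarising frame-level low-level descriptors with statistical functionals, which is what makes the representation fixed-dimensional regardless of utterance length. The sketch below, assuming librosa is available, uses an illustrative functional set (mean, standard deviation, min, max over MFCC frames); the paper's actual HLD configuration is not specified here.

```python
import numpy as np
import librosa

def high_level_descriptor(wav_path: str, n_mfcc: int = 13) -> np.ndarray:
    """Collapse variable-length frame features (LLDs) into one fixed-dimensional vector.

    Illustrative functional set only; the paper's exact HLD recipe may differ.
    """
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape: (n_mfcc, n_frames)
    stats = [mfcc.mean(axis=1), mfcc.std(axis=1),
             mfcc.min(axis=1), mfcc.max(axis=1)]
    return np.concatenate(stats)                              # shape: (4 * n_mfcc,)

# Usage: every utterance maps to the same 52-dimensional vector regardless of length,
# so a lightweight classifier can run with constant memory per utterance.
# vec = high_level_descriptor("utterance.wav"); print(vec.shape)   # (52,)
```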
Citations: 2
The importance of at-home telemonitoring of vital signs for patients with chronic conditions
Pub Date : 2017-12-01 DOI: 10.1109/ICOT.2017.8336080
B. Celler, A. Argha, M. Varnfield, R. Jayasena
In this paper we summarize the results of the CSIRO National Telehealth Project, review the level of user acceptance and patient compliance with the telemonitoring regime, and discuss the unique characteristics of the selected telemonitoring system with respect to patient-centric user interfaces, novel data acquisition methods, and the ability to record, store, and review all graphical traces remotely. We hypothesize that these features are essential to enhance the diagnostic capabilities of the care coordinators, ensure a high level of patient compliance, improve the quality of the measurements, and improve patient self-management.
Citations: 0
Formant smoothing for quality improvement of post-laryngectomised speech reconstruction
Pub Date : 2017-12-01 DOI: 10.1109/ICOT.2017.8336076
H. Sharifzadeh, H. Mehdinezhad, Jacqueline Alleni, I. Mcloughlin, I. Ardekani
In this paper, we use voice samples recorded from laryngectomised patients to develop a novel method for speech enhancement and for regenerating natural-sounding speech for laryngectomees. By leveraging recent advances in computational methods for speech reconstruction, the proposed method takes advantage of both non-training-based and training-based approaches to improve the quality of reconstructed speech for voice-impaired individuals. Since the proposed method has been developed from samples obtained from post-laryngectomised patients, rather than from the characteristics of alternative modes of speech such as whispers and pseudo-whispers, it can address the limitations of current computational methods to some extent. Furthermore, focusing on English vowels, objective evaluations are carried out to show the efficiency of the proposed method.
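The abstract names formant smoothing but does not detail the algorithm. As a rough illustration of the general idea, not the authors' method, the sketch below median-filters a per-frame formant frequency track to suppress spurious jumps; the function name and kernel size are assumptions.

```python
import numpy as np
from scipy.signal import medfilt

def smooth_formant_track(f_hz: np.ndarray, kernel: int = 5) -> np.ndarray:
    """Median-smooth a per-frame formant frequency track (in Hz).

    Illustrative only: the paper's reconstruction pipeline is not specified
    at this level of detail.
    """
    if kernel % 2 == 0:
        raise ValueError("kernel must be odd for medfilt")
    return medfilt(f_hz, kernel_size=kernel)

# Usage: a noisy F1 track with one spurious jump.
f1 = np.array([640.0, 655.0, 648.0, 1210.0, 652.0, 660.0, 645.0])
print(smooth_formant_track(f1))   # the 1210 Hz outlier is replaced by a neighbourhood median
```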
Citations: 1
Snoring and apnea detection based on hybrid neural networks
Pub Date : 2017-12-01 DOI: 10.1109/ICOT.2017.8336088
Bingbing Kang, Xin Dang, Ran Wei
Snoring sound is an essential signal of obstructive sleep apnea (OSA). In order to detect snoring and apnea events in sleep audio recordings, a novel hybrid-neural-network-based snoring detection method is evaluated in this study. The proposed method uses linear predictive coding (LPC) and Mel-frequency cepstral coefficient (MFCC) features. The dataset includes full-night audio recordings from 24 individuals who acknowledged having snoring habits, labelled with polysomnography results. The method is demonstrated experimentally to be effective for snoring and apnea event detection. Its performance is evaluated by classifying different events (snoring, apnea, and silence) in the sleep sound recordings and comparing the classification against ground truth. The proposed algorithm achieves an accuracy of 90.65% for detecting snoring events, 90.99% for apnea, and 90.30% for silence.
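A minimal sketch of the feature side of such a pipeline is shown below, assuming librosa and scikit-learn are available. An off-the-shelf MLP stands in for the paper's hybrid neural network, and the `segment_features` helper, LPC order, and label encoding are hypothetical.

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def segment_features(y: np.ndarray, sr: int, lpc_order: int = 12, n_mfcc: int = 13) -> np.ndarray:
    """LPC + MFCC features for one audio segment (snore / apnea / silence candidate).

    Sketch only: the paper's exact segmentation and hybrid network are not given here.
    """
    lpc = librosa.lpc(y, order=lpc_order)[1:]                    # drop the leading 1.0 coefficient
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)
    return np.concatenate([lpc, mfcc])

# Hypothetical training over pre-segmented clips with labels 0=snore, 1=apnea, 2=silence:
# X = np.stack([segment_features(clip, sr) for clip in clips])
# clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X, labels)
# pred = clf.predict(X_new)
```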
Citations: 13
An integrated vision system on emotion understanding and identity confirmation
Pub Date : 2017-12-01 DOI: 10.1109/ICOT.2017.8336123
Yang-Yen Ou, A. Tsai, Jhing-Fa Wang, Po-Chien Lin
Computer vision technologies provide home robots with visual ability and can improve the user experience in home-robot applications. Vision research tends to solve a single problem in a particular area, such as emotion understanding, identity identification, or object detection. This paper proposes an integrated vision system for emotion understanding and identity confirmation. The facial landmarks, RGB images, and skeleton information captured by a Kinect are the inputs of the integrated vision system. The facial landmarks are used for Action Unit (AU) detection, since an emotion is recognised as a combination of action units. The RGB images and skeleton information are used for identity confirmation. The main contributions are summarized as follows: 1) an integrated vision system is proposed for user description; 2) a hierarchical-architecture SVM is presented for analysing facial action units; and 3) the system uses facial images and skeleton information to enhance identity confirmation. Experiments performed as an online test obtain average accuracies of 86.33% and 86.26%, respectively. The experimental results demonstrate the effectiveness and efficiency of the proposed system in real-time applications.
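One way to realise a hierarchical-architecture SVM for AU analysis is a two-level decomposition: a first SVM gates neutral versus non-neutral faces, and per-AU binary SVMs run only on non-neutral samples. The scikit-learn sketch below follows that hypothetical decomposition; the paper's actual hierarchy and landmark features may differ.

```python
import numpy as np
from sklearn.svm import SVC

class HierarchicalAUDetector:
    """Two-level SVM sketch for action-unit detection from facial-landmark features.

    Hypothetical decomposition: level 1 separates neutral from non-neutral faces,
    level 2 runs one binary SVM per AU on the non-neutral samples.
    """

    def __init__(self, au_ids):
        self.gate = SVC(kernel="rbf")                        # neutral (0) vs. non-neutral (1)
        self.au_svms = {au: SVC(kernel="rbf") for au in au_ids}

    def fit(self, X, neutral_labels, au_labels):
        # X: (n_samples, n_features) landmark features; labels are numpy arrays.
        self.gate.fit(X, neutral_labels)
        active = neutral_labels == 1
        for au, svm in self.au_svms.items():
            svm.fit(X[active], au_labels[au][active])        # AU present (1) / absent (0)
        return self

    def predict(self, x):
        # x: one feature vector; returns the list of detected AU ids.
        if self.gate.predict(x[None])[0] == 0:
            return []                                         # neutral face: no AUs
        return [au for au, svm in self.au_svms.items() if svm.predict(x[None])[0] == 1]
```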
Citations: 1
A chatbot using LSTM-based multi-layer embedding for elderly care
Pub Date : 2017-12-01 DOI: 10.1109/ICOT.2017.8336091
Ming-Hsiang Su, Chung-Hsien Wu, Kun-Yi Huang, Qian-Bei Hong, H. Wang
Owing to demographic change, services designed for the elderly are becoming more needed and increasingly important. In previous work, social media or community-based question-answer data were generally used to build chatbots. In this study, we collected the MHMC chitchat dataset from daily conversations with the elderly. Since people are free to say anything to the system, the collected sentences are converted into patterns in the preprocessing stage to cover the variability of conversational sentences. Then, an LSTM-based multi-layer embedding model is used to extract semantic information across the words and sentences of a single turn containing multiple sentences. Finally, the Euclidean distance is employed to select the closest question pattern, which in turn selects the corresponding answer with which to respond to the elderly. A five-fold cross-validation scheme was employed for training and evaluation. Experimental results show that the proposed method achieved an accuracy of 79.96% for top-1 response selection, outperforming the traditional Okapi model.
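A minimal PyTorch sketch of the retrieval step is given below: an LSTM encodes a turn into a fixed-length vector, and the stored question pattern nearest under Euclidean distance supplies the answer. The single embedding layer, layer sizes, and function names are assumptions standing in for the paper's multi-layer embedding model.

```python
import torch
import torch.nn as nn

class TurnEncoder(nn.Module):
    """Sketch of an LSTM turn encoder used for pattern retrieval by Euclidean distance."""

    def __init__(self, vocab_size=5000, emb_dim=128, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)

    def forward(self, token_ids):            # token_ids: LongTensor of shape (batch, seq_len)
        _, (h, _) = self.lstm(self.emb(token_ids))
        return h[-1]                          # (batch, hidden): final hidden state per turn

def retrieve_answer(encoder, user_ids, pattern_ids, answers):
    """Return the answer attached to the question pattern nearest in Euclidean distance."""
    with torch.no_grad():
        q = encoder(user_ids)                 # (1, hidden): encoded user turn
        p = encoder(pattern_ids)              # (n_patterns, hidden): encoded question patterns
        idx = torch.cdist(q, p).argmin().item()
    return answers[idx]
```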
Citations: 49
Pressure sensor based on patterned laser scribed reduced graphene oxide; experiment & modeling
Pub Date : 2017-12-01 DOI: 10.1109/ICOT.2017.8336077
Zahra Hosseindokht, Mohsen Paryavi, E. Asadian, R. Mohammadpour, H. Rafii-Tabar, P. Sasanpour
A low-cost nanostructured pressure sensor based on a graphene structure has been designed, fabricated, and tested. The sensor consists of a periodic structure of graphene oxide/reduced graphene oxide. The structure is fabricated with a laser scanning system, in which a laser beam converts the graphene oxide substrate to reduced graphene oxide. The geometrical structure of the sensor has been designed and optimized using computational analysis with the Finite Element Method. The measurement results show that the sensor structure is capable of measuring 1.45 kPa. The proposed sensor structure can accordingly be exploited in applications such as electronic skin, synthetic tissue, and robotic structures.
Citations: 0
Evaluation of learning performance by quantifying user's engagement investigation through low-cost multi-modal sensors
Pub Date : 2017-12-01 DOI: 10.1109/ICOT.2017.8336117
Vedant Sandhu, Aung Aung Phyo Wai, C. Y. Ho
Although new forms of learning that embrace digital technologies are emerging, there is still no solution for objectively assessing students' engagement, which is pertinent to learning performance. Beyond the traditional class questionnaire and exam, measuring attention or engagement in real time using sensors is attracting rapidly growing interest. This paper investigates how multimodal sensors can be used to quantify engagement levels through a set of learning experiments. We conducted a two-phase experiment with 10 high-school students who participated in different activities lasting about one hour. Phase 1 involved collecting training data for the classifier, while phase 2 required participants to complete two reading-comprehension tests with passages they liked and disliked, simulating an e-learning experience. We used commercial low-cost sensors, including an EEG headband, a desktop eye tracker, and PPG and GSR sensors, to collect multimodal data. Different features were extracted from the different sensors and labelled using our experimental design and tasks measuring reaction time. Accuracies upwards of 90% were achieved when classifying the EEG data into three engagement levels. We thus suggest leveraging multimodal sensors to quantify multi-dimensional indexes such as engagement and emotion for real-time assessment of learning performance. We hope that our work paves the way for assessing learning performance against multi-faceted criteria encompassing different neural, physiological, and psychological states.
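As one plausible instantiation of the EEG branch (not necessarily the features used in the paper), relative band powers per analysis window can feed a three-class engagement classifier; the band definitions, window length, and classifier choice below are assumptions.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}   # illustrative band choices

def band_powers(eeg: np.ndarray, fs: float) -> np.ndarray:
    """Relative band powers for one EEG channel window; a common engagement proxy,
    not necessarily the feature set used in the paper."""
    f, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 256))
    total = psd.sum()
    return np.array([psd[(f >= lo) & (f < hi)].sum() / total for lo, hi in BANDS.values()])

# Hypothetical 3-class training (0=low, 1=medium, 2=high engagement) over windowed recordings:
# X = np.stack([band_powers(w, fs=256) for w in windows])
# clf = SVC(kernel="rbf").fit(X, labels); clf.predict(X_new)
```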
Citations: 4