Yao Qian, Rutuja Ubale, Matthew David Mulholland, Keelan Evanini, Xinhao Wang
Title: A Prompt-Aware Neural Network Approach to Content-Based Scoring of Non-Native Spontaneous Speech
DOI: 10.1109/SLT.2018.8639697
Published in: 2018 IEEE Spoken Language Technology Workshop (SLT), December 2018
Citations: 12
Abstract
We present a neural network approach to the automated assessment of non-native spontaneous speech in a listen-and-speak task. An attention-based Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) is used to learn the relations (scoring rubrics) between spoken responses and their assigned scores. Each prompt (the listening material) is encoded as a vector in a low-dimensional space and then used to condition the inputs of the attention LSTM-RNN. The experimental results show that our approach performs as well as a strong baseline, a Support Vector Regressor (SVR) using content-related features, achieving a correlation of r = 0.806 with human-assigned holistic proficiency scores, without any feature engineering. The prompt-encoded vector improves the discrimination between high-scoring and low-scoring samples, and it is especially effective in grading responses to unseen prompts, i.e., prompts with no corresponding responses in the training set.
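To make the prompt-conditioning idea concrete, the following is a minimal NumPy sketch of one plausible reading of the mechanism described above: a low-dimensional prompt vector is concatenated to each encoder hidden state before computing attention weights, so the attention pooling is prompt-aware, and the pooled summary is mapped to a scalar score. All function and parameter names here are hypothetical illustrations, not the authors' actual architecture or implementation.

```python
import numpy as np

def prompt_conditioned_attention_score(frames, prompt_vec, W_att, v_att, w_out, b_out):
    """Illustrative sketch (not the paper's exact model).

    frames:     (T, d) hidden states from a response encoder (e.g., an LSTM over the answer)
    prompt_vec: (p,)   low-dimensional encoding of the listening prompt
    W_att:      (d+p, k) attention projection; v_att: (k,) attention scoring vector
    w_out:      (d,)   regression weights; b_out: scalar bias
    Returns a single holistic-score prediction.
    """
    T = frames.shape[0]
    # Concatenate the prompt encoding to every frame so attention depends on the prompt
    cond = np.concatenate([frames, np.tile(prompt_vec, (T, 1))], axis=1)  # (T, d+p)
    e = np.tanh(cond @ W_att) @ v_att          # (T,) unnormalized attention energies
    a = np.exp(e - e.max())
    a /= a.sum()                                # softmax attention weights over frames
    context = a @ frames                        # (d,) attention-pooled response summary
    return float(context @ w_out + b_out)       # scalar proficiency score
```

In this sketch the prompt vector only steers where the attention looks; the pooled context itself is built from the response frames, which matches the intuition that the same answer should be weighted differently depending on which prompt it addresses.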