Automatic plagiarism detection for spoken responses in an assessment of English language proficiency
Xinhao Wang, Keelan Evanini, James V. Bruno, Matthew David Mulholland
2016 IEEE Spoken Language Technology Workshop (SLT), December 2016
DOI: 10.1109/SLT.2016.7846254
Citations: 8
Abstract
This paper addresses the task of automatically detecting plagiarized responses in the context of a test of spoken English proficiency for non-native speakers. Text-to-text content similarity features are used jointly with speaking proficiency features extracted using an automated speech scoring system to train classifiers to distinguish between plagiarized and non-plagiarized spoken responses. A large data set drawn from an operational English proficiency assessment is used to simulate the performance of the detection system in a practical application. The best classifier on this heavily imbalanced data set resulted in an F1-score of 0.706 on the plagiarized class. These results indicate that the proposed system can potentially be used to improve the validity of both human and automated assessment of non-native spoken English.
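The abstract does not specify the exact similarity features or classifier the authors used, so the sketch below is only a minimal illustration of the general recipe it describes: compute a text-to-text similarity feature between a spoken response transcript and known source material, combine it with speaking-proficiency features, and train a classifier with class weighting to cope with the heavily imbalanced data. The toy data, feature names, and the logistic-regression choice are all assumptions, not the paper's method.

```python
# Minimal sketch (not the authors' implementation) of combining a
# text-to-text similarity feature with placeholder proficiency features
# to classify plagiarized vs. non-plagiarized spoken responses.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# Toy data: transcripts of spoken responses and a known source passage.
responses = [
    "the internet has changed how people communicate with each other",
    "in my opinion students should study abroad to learn new cultures",
    "the internet changed how people communicate with each other every day",
    "i like to play basketball with my friends on the weekend",
]
sources = ["the internet has changed how people communicate with each other"]
labels = np.array([1, 0, 1, 0])  # 1 = plagiarized, 0 = non-plagiarized

# Text-to-text similarity feature: max TF-IDF cosine similarity
# between a response and any known source text.
vectorizer = TfidfVectorizer().fit(responses + sources)
resp_vecs = vectorizer.transform(responses)
src_vecs = vectorizer.transform(sources)
sim = cosine_similarity(resp_vecs, src_vecs).max(axis=1, keepdims=True)

# Placeholder speaking-proficiency features (e.g., fluency or pronunciation
# scores from an automated scoring system); random values stand in here.
rng = np.random.default_rng(0)
proficiency = rng.normal(size=(len(responses), 2))

X = np.hstack([sim, proficiency])

# class_weight="balanced" is one simple way to handle class imbalance;
# in practice the model would be evaluated on held-out operational data.
clf = LogisticRegression(class_weight="balanced").fit(X, labels)
pred = clf.predict(X)
print("F1 on plagiarized class:", f1_score(labels, pred, pos_label=1))
```

The reported 0.706 refers to the F1-score on the plagiarized class, i.e. the harmonic mean of precision and recall computed with plagiarized responses as the positive label, which is why `pos_label=1` is passed above.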