Exploring effective methods for automated essay scoring of non-native speakers

Kornwipa Poonpon, Paiboon Manorom, Wirapong Chansanam
Contemporary Educational Technology, 2023-10-01. DOI: 10.30935/cedtech/13740
Automated essay scoring (AES) has become a valuable tool in educational settings, providing efficient and objective evaluations of student essays. However, most AES systems have focused primarily on native English speakers, leaving a critical gap in the evaluation of non-native speakers’ writing skills. This research addresses that gap by exploring the effectiveness of automated essay-scoring methods designed specifically for non-native speakers. The study acknowledges the unique challenges posed by variations in language proficiency, cultural differences, and linguistic complexity when assessing non-native speakers’ writing abilities. This work focuses on the Automated Student Assessment Prize (ASAP) and Khon Kaen University academic English language test datasets and presents an approach that leverages variants of the long short-term memory (LSTM) network to learn features, comparing results with the Kappa coefficient. The findings demonstrate that the proposed framework and approach, which jointly learn different essay representations, yield significant benefits and achieve results comparable to state-of-the-art deep learning models. These results suggest that the novel text representation proposed in this paper holds promise as an effective new option for assessing the writing of non-native speakers. The results of this study can be applied to advance educational assessment practices and to promote equitable opportunities for language learners worldwide by improving the evaluation process for non-native speakers.
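The abstract says model scores are compared with human ratings using "the Kappa coefficient" without specifying the variant. On the ASAP benchmark the quadratically weighted kappa (QWK) is the conventional agreement metric, so the sketch below assumes QWK; this is an illustration of the metric, not the paper's own code.

```python
from collections import Counter

def quadratic_weighted_kappa(a, b, min_rating, max_rating):
    """Quadratic weighted kappa between two lists of integer ratings.

    Illustrative assumption: the paper only says "Kappa coefficient";
    QWK is shown because it is the standard choice for ASAP-style AES.
    """
    assert len(a) == len(b) and len(a) > 0
    n = max_rating - min_rating + 1  # number of distinct rating levels

    # Observed confusion matrix between the two raters.
    observed = [[0] * n for _ in range(n)]
    for x, y in zip(a, b):
        observed[x - min_rating][y - min_rating] += 1

    # Marginal rating histograms (used to form the chance-expected matrix).
    hist_a = Counter(x - min_rating for x in a)
    hist_b = Counter(y - min_rating for y in b)
    total = len(a)

    num = den = 0.0
    for i in range(n):
        for j in range(n):
            # Quadratic disagreement weight: 0 on the diagonal, 1 at the corners.
            w = ((i - j) ** 2) / ((n - 1) ** 2)
            # Expected count under rater independence, scaled to sum to `total`.
            expected = hist_a[i] * hist_b[j] / total
            num += w * observed[i][j]
            den += w * expected
    return 1.0 - num / den
```

Perfect agreement yields 1.0 and systematic disagreement goes negative; because the weights grow quadratically with the score gap, a model that is off by two points is penalized four times as heavily as one off by a single point, which suits ordinal essay scores.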