Pub Date: 2021-06-20 | DOI: 10.1080/15434303.2021.1937174
Jirada Wudthayagorn
ABSTRACT One of the many education reforms in Thailand is a policy aimed at improving English language proficiency among university students. One direction in this policy requires that each university administer a standardized English language test to its students before they graduate, and that the students’ scores be aligned to the Common European Framework of Reference for Languages (CEFR) or other standards. This research study examined the English exit examination systems in Thailand by analyzing how all 81 public universities have implemented this policy. Secondary sources of data, including official documents, government statistical reports, and related research studies, were collected, and semi-structured interviews via phone were also conducted. Summative content analysis was used to analyze the data. The results revealed six approaches to creating and selecting tests for policy implementation. Among these universities, different benchmarks were established. These approaches and benchmarks were developed to suit each university’s management readiness and academic context. Also, although the policy allows for different standards to be followed, the CEFR is the only one being used. In summary, English exit examination policy and practice might raise awareness of the importance of English and motivate students to improve their ability, but it is not likely to guarantee expected English proficiency levels.
Title: An Exploration of the English Exit Examination Policy in Thai Public Universities
Pub Date: 2021-06-18 | DOI: 10.1080/15434303.2021.1931231
J. Rüsseler, Denise Arendt, T. Münte, B. Mohammadi, M. Boltzmann
ABSTRACT Testing language abilities is common in the context of migration. It has been observed that approximately 6.2 million adults in Germany are low literate and that approximately 47% of this group does not have German as their L1. Similar findings have been reported for other modern Western societies with compulsory schooling and a well-developed educational system. However, in most testing situations no exemptions are granted for low-literate learners, and the same tests are used irrespective of the level of reading proficiency. In this review, we focus on brain imaging research showing that reading-related neural networks in the brain differ between (L1) low-literate adults and adults with normal reading abilities. We argue that these differences in functional brain anatomy influence cognition in general and should form the basis for changes in the practice of granting exemptions in language testing involving low-literate adults. Possible consequences for language assessment for different purposes are discussed. Furthermore, the reported influence of literacy on functional brain organization should be considered for decisions in the context of granting exemptions for low literates in language assessment. Keywords: review; low literates; fMRI; resting state; brain structure; language assessment.
Title: Literacy Affects Brain Structure – What Can We Learn for Language Assessment in Low Literates?
Pub Date: 2021-06-16 | DOI: 10.1080/15434303.2021.1931232
Keisuke Kubota, Y. Yokouchi, Rie Koizumi
ABSTRACT This is an interview article with Randolph H. Thrasher and Yoshinori Watanabe, leading experts on language assessment in Japan. Randolph H. Thrasher is professor emeritus at Okinawa Christian University, Japan, and International Christian University, Japan. He has made numerous significant contributions to spreading and supporting the field of language assessment. Yoshinori Watanabe is a professor at Sophia University, Graduate School of Language and Linguistics, Japan. He is also a renowned figure in language assessment and testing domestically and internationally. We interviewed them to deepen insights into various aspects of language assessment in Japan, as well as in the world. This interview was conducted in March 2021 via e-mail because of the global COVID-19 pandemic. In answering our questions for this interview, Professors Thrasher and Watanabe wrote about their own early lives and careers and their fruitful experiences with the Japan Language Testing Association (JLTA), shared their passionate beliefs about language assessment and testing from domestic and international perspectives, and offered hopeful messages to young researchers. This article is an interview with Professor Emeritus Randolph H. Thrasher (Okinawa Christian University and International Christian University) and Professor Yoshinori Watanabe (Graduate School, Sophia University), who pioneered the field of language testing in Japan and remain active internationally. Thrasher served as the second president of the Japan Language Testing Association (JLTA), and Watanabe currently leads the association as its fourth president. Both have not only contributed to the academic development of language testing but have also returned their research findings to society and advanced foreign language education internationally; they can truly be called Japan’s leading language testing researchers. Members of JLTA since its founding, both have long been active at its center and internationally, including in the International Language Testing Association (ILTA). In this interview, they shared their careers to date, their valuable experiences with JLTA, their insights into language testing from domestic and international perspectives, their passion and convictions regarding language testing, and messages for the young researchers who will carry the field forward.
Title: “Assessment Research for the Benefit of Humanity”: An Interview with Randy Thrasher and Yoshinori Watanabe
Pub Date: 2021-06-13 | DOI: 10.1080/15434303.2021.1931230
Hannelore Hooft, Mariet Schiepers, Goedele Vandommele
ABSTRACT Many migrants have had few opportunities to develop functional literacy skills, even in their L1. Even so, migration and integration policies in Western host societies often assume literacy skills and fail to consider accommodations for low-literate migrants. Valid, reliable instruments to identify low-literate migrants and policy-oriented research into the assessment of low-literates are scarce. This study makes a contribution to research and practice in this area. It reports on the development of a valid multilingual assessment tool to efficiently shed light on adult migrants’ literacy skills. This tool was trialed on a representative sample of 351 asylum seekers in Belgian asylum centers. The performance data were analyzed using Rasch measurement and DIF analysis. First, the results confirmed that the tool allowed for the identification of four literacy levels. Second, the frequency distribution of the Rasch person measurements showed that a substantial proportion of asylum seekers do not possess the literacy skills to participate fully in their host society upon arrival. The findings underline the importance of (a) taking literacy skills into account in both integration policy and assessment; (b) viewing literacy as a broad functional spectrum while considering the influence of contextual factors such as language of assessment and processing time.
Title: Developing and Validating a Multilingual Literacy Test for Asylum Seekers
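The Rasch measurement and DIF analysis named in the abstract above can be sketched in a few lines. This is not the study's code: the model is simply the standard Rasch (one-parameter logistic) model, and all item values below are hypothetical.

```python
import math

def rasch_p(theta: float, b: float) -> float:
    """Rasch (1PL) probability that a person of ability theta answers
    an item of difficulty b correctly (both on the same logit scale)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A person whose ability equals the item's difficulty succeeds half the time.
p_at_match = rasch_p(0.0, 0.0)

# A crude uniform-DIF check: calibrate the same item separately in two
# groups and compare the difficulty estimates (hypothetical values).
b_group_a = -0.4   # item appears easier for group A
b_group_b = 0.6    # item appears harder for group B
dif_contrast = b_group_b - b_group_a  # a large contrast flags possible DIF
```

In practice, person measures from such a model underpin the frequency distributions the abstract reports, and DIF contrasts are evaluated against a statistical criterion rather than inspected by eye.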
Pub Date: 2021-05-25 | DOI: 10.1080/15434303.2021.1922412
Sondoss Elnegahy, H. Jin, Haeun Kim
While the discussion of fairness and justice in language testing has gained momentum over the past two decades (e.g. Davies, 2004; Kane, 2010; Kunnan, 2000, 2004, 2010; McNamara & Ryan, 2011; Shoha...
Title: Fairness, Justice, and Language Assessment
Pub Date: 2021-05-10 | DOI: 10.1080/15434303.2021.1919116
Ray J. T. Liao
ABSTRACT Jessica Wu is Research and Development Program Director at the Language Training and Testing Center (LTTC) in Taipei, Taiwan. Jessica has been deeply involved in the development and validation of the General English Proficiency Test (GEPT) in addition to a number of other foreign language testing programs. Her research focuses on speaking assessment and the impact of standardized language testing. Jessica is a founding member of the Asian Association for Language Assessment (AALA), for which she has served as both co-president (2016 to 2017) and president (2018 to 2019). She currently serves as an advisor for the development and administration of L1 tests (i.e., Hakka, Southern Min, and Indigenous language proficiency certifications) in Taiwan. Jessica has authored a number of publications, including articles and book chapters in the field of language testing, and has presented her work at conferences around the world. Most recently, she co-edited English Language Proficiency Testing in Asia: A New Paradigm Bridging Global and Local Contexts with Cyril Weir and I-Wen Su. Her contributions to language testing are significant and immeasurable, particularly with regard to Asia. The following interview with Jessica, which was conducted via an online meeting platform due to the COVID-19 pandemic, took place in January 2021.
Title: Advancing the International Recognition of the Locally-Produced GEPT: An Interview with Jessica Wu
Pub Date: 2021-04-25 | DOI: 10.1080/15434303.2021.1908295
Senyung Lee, Sun-young Shin
ABSTRACT Multiple test tasks are available for assessing L2 collocation knowledge. However, few studies have investigated the characteristics of a variety of recognition and recall tasks of collocation simultaneously, and most research on L2 collocations has focused on verb-noun and adjective-noun collocations. This study investigates (1) the relative informativeness of different tasks for assessing L2 collocation knowledge and (2) the effect of collocation type on learners’ scores on collocation tasks. Four tasks were developed based on an extensive review of research on L2 collocations: a sentence writing task, fill-in-the-blank task, multiple-choice task, and Yes/No acceptability judgment task. Each task targeted 64 English collocations, including verb-noun, adjective-noun, adverb-adjective, and adverb-verb collocations. Four groups of adult ESL learners representing different levels of academic English literacy (n = 205) completed the tasks. An item response theory analysis showed that the sentence writing and fill-in-the-blank tasks had similar difficulty and discriminating power, the eight-option multiple-choice task had the highest discriminating power, and the Yes/No judgment task had the lowest difficulty and discriminating power. The type of collocation did not have a significant effect on learners’ scores when collocation frequency was held constant, regardless of task and learners’ level of academic English literacy.
Title: Towards Improved Assessment of L2 Collocation Knowledge
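The difficulty and discrimination findings reported in the abstract above come from an item response theory analysis. As an illustrative sketch only (not the authors' model or data), a two-parameter logistic (2PL) model shows how a highly discriminating item, such as the eight-option multiple-choice task, separates nearby ability levels more sharply than a weakly discriminating one, such as the Yes/No task; all parameter values here are hypothetical.

```python
import math

def p_2pl(theta: float, a: float, b: float) -> float:
    """2PL IRT model: probability of a correct response for ability theta,
    given item discrimination a and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical items echoing the reported pattern: a multiple-choice item
# with high discrimination vs. a Yes/No item with low discrimination,
# both of average difficulty.
mc_item = {"a": 1.8, "b": 0.0}
yn_item = {"a": 0.4, "b": 0.0}

# Spread in success probability between a weaker (-1 logit) and a
# stronger (+1 logit) learner on each item:
mc_spread = p_2pl(1.0, **mc_item) - p_2pl(-1.0, **mc_item)
yn_spread = p_2pl(1.0, **yn_item) - p_2pl(-1.0, **yn_item)
# The more discriminating item separates the two learners far more widely.
```

A steeper item characteristic curve (larger a) is what "highest discriminating power" describes: near the item's difficulty, small ability differences translate into large differences in expected performance.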
Pub Date: 2021-04-25 | DOI: 10.1080/15434303.2021.1903470
Qin Xie, Yuqing Lei
ABSTRACT This research conducted diagnostic assessment of problems in first-year undergraduates’ English academic papers and traced potential sources of these problems to the writing process and strategy use. The data collected included 339 term papers and interviews with 17 students. The samples were manually error-tagged and marked against a detailed diagnostic checklist. The resultant textual features were then compared between two subgroups of Chinese students in the sample, namely, those graduating from local schools in Hong Kong (LS) and those coming from the mainland and sojourning in Hong Kong (MS). The analyses found both groups had the poorest performance in source integration and vocabulary use. LS used simpler words and made more grammatical errors, whereas MS attempted sophisticated vocabulary more successfully and used a wider variety of words and sentence structures. The difficulties they experienced, however, were rather similar, residing mainly at the researching, planning and formulating stages. Action control theory was introduced to interpret the self-regulatory strategies they adopted to cope with perceived difficulties during the writing process. Strategies to control goals, control resources, and control cognitive load were found to be the most typical. While these strategies could reduce their difficulties, only some seemed also to help with performance. A conceptual framework is proposed at the end to link writing products, process and self-regulatory control strategies as evidenced in the study. Four diagnoses are drawn with suggestions for practice and further research.
Title: Diagnostic Assessment of L2 Academic Writing Product, Process and Self-regulatory Strategy Use with a Comparative Dimension
Pub Date: 2021-04-06 | DOI: 10.1080/15434303.2021.1874382
Daniel R. Isbell
ABSTRACT The purpose of diagnostic language assessment is to identify learner strengths and weaknesses so that subsequent learning activity can be planned according to learner needs, a purpose which aligns with pedagogical recommendations to individualize instruction. Indeed, the extent to which diagnostic assessment can inform the selection of instructional targets and in turn promote individual linguistic development is critical to validity. In this mixed-methods study, I report on evidence for the use of a new second language Korean pronunciation diagnostic. I first compare the self-assessments and diagnostic scores of 198 learners to consider the potential to beneficially raise awareness of strengths and weaknesses. I supplement these findings with analysis of 21 learners’ reactions to score reports. Next, I focus on the learning activity and learning gains of a subset of 14 learners whom I interviewed and retested approximately 3 months after receiving their initial feedback. Results indicated that (a) many learners had gaps in their self-assessments of pronunciation, (b) learners readily understood the meaning and intended purpose of diagnostic information (i.e., to guide learning activity), and (c) learners who sustained application of diagnostic information in their self-directed learning efforts could make measurable improvements to their pronunciation.
Title: Can the Test Support Student Learning? Validating the Use of a Second Language Pronunciation Diagnostic