Pub Date: 2023-07-01 | DOI: 10.1016/j.asw.2023.100728
Mark Feng Teng, Ying Zhan
The present study assessed how task complexity and learner variables (English proficiency level, self-regulated writing strategies, and writing self-efficacy beliefs) influence English academic writing among students in a foreign language context. The participants were 270 students from a medium-sized university in China. All participants completed measures of self-regulated writing strategies and self-efficacy, as well as an academic writing test. The guiding research questions explored the extent to which task complexity and English proficiency level influenced writing performance, and how learners’ self-efficacy and self-regulated writing strategies mediated the role of task complexity in academic writing performance. Structural equation modelling results showed that task complexity and English proficiency level influenced learners’ writing performance, and that self-efficacy beliefs and the use of self-regulated writing strategies mediated the role of task complexity in academic writing performance. Implications for the assessment of task complexity, self-regulated writing strategies, self-efficacy, and academic writing are discussed.
{"title":"Assessing self-regulated writing strategies, self-efficacy, task complexity, and performance in English academic writing","authors":"Mark Feng Teng , Ying Zhan","doi":"10.1016/j.asw.2023.100728","DOIUrl":"https://doi.org/10.1016/j.asw.2023.100728","url":null,"abstract":"<div><p>The present study focused on the assessment of how task complexity and learner variables (English proficiency level, self-regulated writing strategies, and writing self-efficacy belief) influence English academic writing for students in a foreign language context. The participants were 270 students from a medium-sized university in China. All participants completed measures on self-regulated writing strategies, self-efficacy, and an academic writing test. Guided research questions aimed to explore the extent to which task complexity and English proficiency level influenced writing performance along with how learners’ self-efficacy and self-regulated writing strategies mediated the role of task complexity in academic writing performance. Structural equation modelling results showed that task complexity and English proficiency level influenced learners’ writing performance. Self-efficacy beliefs and the use of self-regulated writing strategies mediated the role of task complexity on academic writing performance. Implications related to the assessment of task complexity, self-regulated writing strategies, self-efficacy, and academic writing were discussed.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":null,"pages":null},"PeriodicalIF":3.9,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49817976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-01 | DOI: 10.1016/j.asw.2023.100771
Yuhua Liu, Jianda Liu
The study used a mixed-methods approach to compare raters' scoring of writing performances across three modes: paper-based text, on-screen marking of scanned images, and online word-processed text. Six experienced raters evaluated the performances of 39 test-takers in each mode. The many-facet Rasch model was employed to analyze scoring differences among the rating modes, and semi-structured interviews were used to collect raters' perceptions of performances under the three modes. The findings indicated that difficulty increased in the order of on-screen marking of scanned images, paper-based text, and online word-processed text. Bias analysis revealed interactions between rater and mode, as well as between criterion and mode. Verbal reports from the raters highlighted four construct-irrelevant factors that could potentially influence scoring under the three modes: convenience of essay overview, word recognition, potential underestimation of word count, and raters' preference for handwritten essays. Based on the results, recommendations are provided for rater training and essay scoring across different modes.
{"title":"Comparing computer-based and paper-based rating modes in an English writing test","authors":"Yuhua Liu , Jianda Liu","doi":"10.1016/j.asw.2023.100771","DOIUrl":"https://doi.org/10.1016/j.asw.2023.100771","url":null,"abstract":"<div><p>The study utilized a mixed methods approach to compare the scoring of raters in assessing writing performance across three modes: paper-based, on-screen marking of scanned images, and online word-processed versions. six experienced raters evaluated the performances of 39 test-takers in each mode. The many-facet Rasch model was employed to analyze scoring differences among the rating modes; the semi-structured interview was used to collect raters' perceptions towards performance under the three modes. The findings indicated that the difficulty level was ranked in ascending order of on-screen marking of scanned images, paper-based text, and online word-processed text. Bias analysis revealed interactions between the rater and the mode, as well as between the criterion and the mode. Verbal reports from the raters highlighted four construct-irrelevant factors that could potentially influence scoring under the three modes: convenience for essay overview and word recognition, potential underestimation of word count, and raters' preference for essays in handwriting. Based on the results, recommendations were provided for rater training and essay scoring across different modes.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":null,"pages":null},"PeriodicalIF":3.9,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49818284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-01 | DOI: 10.1016/j.asw.2023.100760
Kioumars Razavipour
The mainstream approach to teacher assessment literacy seems to be founded on a (post)positivist paradigm, leading to an autonomous model of literacy comprised of generic knowledge and skills. This paradigm obscures the non-cognitive, embodied, and affective dimensions of assessment practices. In this conceptual inquiry, I use New Materialist philosophy to make sense of writing assessment literacy and feedback practices. In New Materialisms, the materiality of everything is emphasized: ontology is flat, reality is becoming, agency is relational, knowledge is entangled practice, and language is a resource in a communicative assemblage. Using these conceptual tools, I offer a materialized conceptualization of writing assessment and feedback practice, arguing that from a New Materialist perspective, feedback practices are an assemblage of rhetoric, IELTS, institution, materiality, art, cross-lingual resources, social relations, affect, and embodiment.
{"title":"Classroom writing assessment and feedback practices: A new materialist encounter","authors":"Kioumars Razavipour","doi":"10.1016/j.asw.2023.100760","DOIUrl":"https://doi.org/10.1016/j.asw.2023.100760","url":null,"abstract":"<div><p>The mainstream approach to teacher assessment literacy seems to be founded on a (post)positivist paradigm leading to an autonomous model of literacy comprised of generic knowledge and skills. This paradigm obscures the non-cognitive, embodied, and affective dimensions of assessment practices. In this conceptual inquiry, I use the New Materialist philosophy to make sense of writing assessment literacy and feedback practices. In New Materialisms, the materiality of everything is emphasized, ontology is flat, reality is becoming, agency is relational, knowledge is entangled practice, and language is a resource in communicative assemblage. Using the noted conceptual tools, I try to provide a materialized conceptualization of writing assessment and feedback practice arguing that from a New Materialist perspective, feedback practices are an assemblage of rhetoric, IELTS, institution, materiality, art, cross-lingual resources, social relations, affect, and embodiment.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":null,"pages":null},"PeriodicalIF":3.9,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49817936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-01 | DOI: 10.1016/j.asw.2023.100743
Icy Lee, Mehmet Karaca, Serhat Inan
Feedback-literate teachers play a central role in promoting students’ writing performance. In L2 writing, however, there is a paucity of research on teacher feedback literacy, and instruments that investigate L2 writing teacher feedback literacy are virtually non-existent. Heeding the call for scale development to measure teacher feedback literacy, this two-phase study develops and validates a feedback literacy scale (FLS) for teachers to illuminate this budding concept in L2 writing. The factor structure of the 34-item FLS was determined through exploratory factor analysis (EFA) with 223 writing teachers. The results revealed a three-factor solution: Perceived Knowledge, Values, and Perceived Skills. A confirmatory factor analysis (CFA) was then employed to verify the structure of the scale and its three sub-scales, based on a sample of another 208 writing teachers. The model fitted the data well (e.g., RMSEA = 0.052, 90% CI [0.045, 0.059]), indicating that the FLS yields psychometrically reliable and valid results and is a robust scale for measuring the self-reported feedback literacy of L2 writing teachers. In light of these findings, the factor structure and sub-scales of the FLS are discussed. Practical implications for teachers, teacher trainers, and teacher education programs, as well as implications for feedback literacy research, are provided.
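For readers unfamiliar with the fit statistic reported in this abstract, RMSEA in maximum-likelihood CFA is conventionally computed as follows (a standard textbook formula, not drawn from the study itself):

```latex
\mathrm{RMSEA} \;=\; \sqrt{\max\!\left(\frac{\chi^{2} - df}{df\,(N-1)},\; 0\right)}
```

where $\chi^{2}$ is the model chi-square, $df$ its degrees of freedom, and $N$ the sample size; values at or below roughly .06 are commonly cited as indicating good fit, which is consistent with the reported 0.052.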
{"title":"The development and validation of a scale on L2 writing teacher feedback literacy","authors":"Icy Lee , Mehmet Karaca , Serhat Inan","doi":"10.1016/j.asw.2023.100743","DOIUrl":"https://doi.org/10.1016/j.asw.2023.100743","url":null,"abstract":"<div><p><span>Feedback literate teachers play a central role in promoting students’ writing performance. In L2 writing, however, there is a paucity of research on teacher feedback literacy, and instruments that investigate L2 writing teacher feedback literacy are virtually non-existent. Heeding the call for research on scale development to measure teacher feedback literacy, this two-phase study is an attempt to develop and validate a feedback literacy scale (FLS) for teachers to illuminate this budding concept in L2 writing. The factor structure of the 34-item FLS was determined through exploratory factor analysis (EFA) with the participation of 223 writing teachers. The results revealed a three-factor solution, namely </span><em>Perceived Knowledge</em>, <em>Values</em>, and <em>Perceived Skills</em><span>. A confirmatory factor analysis<span> (CFA) was employed which aimed to verify the structure of the scale and its three sub-scales, based on a sample of another 208 writing teachers. It was found that the model fits the data well (e.g., the RMSEA with 0.052 (90 % CI=0.045–0.059)), proving that the FLS yields psychometrically reliable and valid results, and it is a robust scale to measure the self-reported feedback literacy of L2 writing teachers. In light of these findings, the factor structure and sub-scales of the FLS are discussed. Practical implications for teachers, teacher trainers and teacher education programs, as well as implications for feedback literacy research are provided.</span></span></p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":null,"pages":null},"PeriodicalIF":3.9,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49818280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-01 | DOI: 10.1016/j.asw.2023.100742
Zhibin Shan, Hua Yang, Hao Xu
This qualitative study explores the influence of curriculum configuration, as an instructional context, on teachers’ L2 writing assessment literacy and practices. Specifically, the study examined how a group of university French language teachers in China drew on their assessment literacy to assess students’ embedded writing situated in an integrated language course for beginning French learners. Findings show that a curriculum configuration that prioritised students’ acquisition of language knowledge over writing development seemed to lead the teachers to adjust their writing assessment to reconcile students’ acquisition of language knowledge with their development of writing skills. Whilst teachers’ assessment literacy did not seem to be affected, their assessment practices showed a sequenced “split” between assessments of language issues and writing issues, which were separated and then ordered according to their perceived importance. Teachers’ beliefs about learners’ general learning needs, rather than their needs for learning L2 writing, seemed to determine how teachers navigated the sequence of the split assessments to order their priorities.
{"title":"The mediating role of curriculum configuration on teacher’s L2 writing assessment literacy and practices in embedded French writing","authors":"Zhibin Shan , Hua Yang , Hao Xu","doi":"10.1016/j.asw.2023.100742","DOIUrl":"https://doi.org/10.1016/j.asw.2023.100742","url":null,"abstract":"<div><p>This qualitative study<span> explores the influence of curriculum configuration as an instructional context on teachers’ L2 writing assessment literacy and practices. Specifically, the study examined how a group of university French language teachers in China drew on their assessment literacy to assess student’s embedded writing situated in an integrated language course for beginning French learners. Findings show that the curriculum configuration that prioritised student’s language knowledge acquisition over writing development seemed to cause the teachers to adjust their writing assessment to reconcile student’s acquisition of language knowledge and development of writing skills. Whilst teachers’ assessment literacy did not seem to be affected, their assessment practices showed a sequenced “split” between assessments on language issues and writing issues, which were separated and then ordered according to their perceived importance. Teachers’ beliefs about learners’ general learning needs, rather than those for learning L2 writing, seemed to determine how teachers navigated the sequence of the split assessments to order their priorities.</span></p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":null,"pages":null},"PeriodicalIF":3.9,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49818281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-01 | DOI: 10.1016/j.asw.2023.100746
Albert W. Li
Peerceptiv is a peer assessment tool developed by learning sciences researchers to help students demonstrate disciplinary knowledge through writing feedback practices. This review of Peerceptiv describes its key features while comparing it with other writing feedback tools and suggesting possibilities and limitations of using it to support AI-based online writing assessment across the disciplines. Future considerations regarding the use of Peerceptiv in assessing, teaching, and researching online writing are discussed.
{"title":"Using Peerceptiv to support AI-based online writing assessment across the disciplines","authors":"Albert W. Li","doi":"10.1016/j.asw.2023.100746","DOIUrl":"https://doi.org/10.1016/j.asw.2023.100746","url":null,"abstract":"<div><p><em>Peerceptiv</em> is a peer assessment tool developed by learning sciences researchers to help students demonstrate disciplinary knowledge through writing feedback practices. This review of <em>Peerceptiv</em> describes its key features while comparing it with other writing feedback tools and suggesting possibilities and limitations of using it to support AI-based online writing assessment across the disciplines. Future considerations regarding the use of <em>Peerceptiv</em> in assessing, teaching, and researching online writing are discussed.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":null,"pages":null},"PeriodicalIF":3.9,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49818289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-01 | DOI: 10.1016/j.asw.2023.100739
Peisha Wu, Shulin Yu, Yanqi Luo
While recent years have witnessed increasing theoretical and empirical elaboration of the construct of teacher feedback literacy in higher education and second language education, little research has investigated how teacher feedback literacy develops, especially when teachers collaborate to improve feedback strategies with technology. To fill this gap, the present study examined two L2 writing teachers who took the initiative to create, update, and implement a human-computer-automatic writing evaluation (AWE) integral feedback platform, and how this feedback innovation process shaped their feedback literacy development. The analysis of multiple data sources, including semi-structured interviews, stimulated recalls, classroom observations, and artifacts, revealed that the two teachers approached the innovation by orchestrating mediating tools, interacting dialogically with social agents, reflecting critically, and crossing boundaries. Through this process, teacher feedback literacy developed at varying rates across different aspects. Specifically, positive changes were effected in the teachers’ feedback thinking as well as their feedback-giving and feedback-sharing practices; however, their feedback literacy in classroom practice did not show equally salient positive outcomes. Possible reasons are discussed regarding the scope of the feedback innovation and contextual constraints, and implications are offered. The study underscores L2 writing teacher feedback literacy as a developmental phenomenon molded by situated social practice.
{"title":"The development of teacher feedback literacy in situ: EFL writing teachers’ endeavor to human-computer-AWE integral feedback innovation","authors":"Peisha Wu , Shulin Yu , Yanqi Luo","doi":"10.1016/j.asw.2023.100739","DOIUrl":"https://doi.org/10.1016/j.asw.2023.100739","url":null,"abstract":"<div><p>While recent years have witnessed increasing theoretical and empirical elaboration on the construct of teacher feedback literacy in higher education and second language education, little research has investigated the development of teacher feedback literacy, especially when teachers collaborate in an attempt to improve feedback strategies with technology. To fill this gap, the present study examined two L2 writing teachers taking the initiative to create, update, and implement a human-computer-automatic writing evaluation (AWE) integral feedback platform, and how such a feedback innovation process impacted their feedback literacy development. The analysis of multiple sources of data, including semi-structured interviews, stimulated recalls, class observation, and artifacts, revealed that the two teachers approached the innovation by orchestrating mediating tools, interacting dialogically with social agents, reflecting critically, and crossing boundaries. Through this process, the development of teacher feedback literacy occurred at varying rates across different aspects. Specifically, positive changes were effected in the teachers’ feedback thinking as well as feedback giving and sharing practices. However, the teachers’ feedback literacy in classroom practice did not seem to have generated as salient a positive outcome. Possible reasons are discussed regarding the scope of the feedback innovation and contextual constraints, and implications are offered. The study underscored L2 writing teacher feedback literacy as a developmental phenomenon molded by situated social practice.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":null,"pages":null},"PeriodicalIF":3.9,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49858733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-01 | DOI: 10.1016/j.asw.2023.100756
Ana Castaño Arques, Carmen López Ferrero
Learning how to write occluded genres is an elusive task (Swales, 1996) – even more so for students writing in a second or additional language. To achieve discourse competence in one of these genres, in this case the ‘statement of purpose’ typical of postgraduate programme admission forms, it is first necessary to fully understand its features at both the macrotextual and microlinguistic levels (Gillaerts, 2003; Bhatia, 2004). This qualitative study focuses on the writing of learners of Spanish as an additional language to analyse whether feedback provided by peers impacts the quality of the statements of purpose they write. Through a dual discourse analysis of their written work and in-class interactions during peer-feedback sessions, our study finds that, when properly trained and using tailored assessment tools, students can use peer assessment profitably to improve the quality of their statements of purpose, as well as to acquire appropriate metalanguage to guide others. Our results thus reconfirm the beneficial effects of helping students achieve feedback literacy.
{"title":"Peer-feedback of an occluded genre in the Spanish language classroom: A case study","authors":"Ana Castaño Arques , Carmen López Ferrero","doi":"10.1016/j.asw.2023.100756","DOIUrl":"https://doi.org/10.1016/j.asw.2023.100756","url":null,"abstract":"<div><p>Learning how to write occluded genres is an elusive task (Swales, 1996) – even more so in the case of students writing in a second or additional language. To achieve discourse competence in the use of one of these genres, in this case the ‘statement of purpose’ typical of post-graduate programme admission forms, it is first necessary to fully understand its features at both the macrotextual and microlinguistic levels (Gillaerts, 2003; Bhatia, 2004). This qualitative study focuses on the writing of learners of Spanish as an additional language to analyse whether feedback provided by peers impacts the quality of the statements of purpose they write. Through a dual discourse analysis of their written work and in-class interactions during peer- feedback sessions, our study finds that, when properly trained and using tailored assessment tools, students can use peer-assessment profitably to improve the quality of their statements of purpose, as well as to acquire appropriate metalanguage to guide others. Our results thus reconfirm the beneficial effects of helping students to achieve feedback literacy.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":null,"pages":null},"PeriodicalIF":3.9,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49817980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-01 | DOI: 10.1016/j.asw.2023.100759
Peng Wu, Chunlin Lei
Although a wealth of studies has explored how to develop student feedback literacy, the development of postgraduates’ feedback literacy remains under-explored. To bridge this gap, this study investigates the characteristics of developing feedback literacy among postgraduates in an academic writing class. Based on the theoretical framework proposed by Yu and Liu (2021), the study designed two iterations and drew data from a survey, students’ multi-draft writing, peer feedback, students’ reflective journals, and discussion scenarios in dialogues. It was found that, by the end of iteration two, students had developed a dynamic view of feedback, enriched their strategies for resolving cognitive conflicts, and become emotionally resilient to feedback. The study also showed that dialogues helped develop student feedback literacy and resolve cognitive conflicts by clarifying misconceptions, formulating revising actions, and co-constructing new ideas about writing and assessment. This study has pedagogical implications for designing feedback processes that facilitate student feedback literacy in academic writing classes.
{"title":"Developing feedback literacy through dialogue-supported performances of multi-draft writing in a postgraduate class","authors":"Peng Wu , Chunlin Lei","doi":"10.1016/j.asw.2023.100759","DOIUrl":"https://doi.org/10.1016/j.asw.2023.100759","url":null,"abstract":"<div><p><span>Although a wealth of studies has been conducted to explore how to develop student feedback literacy, the development of postgraduates’ feedback literacy is under-explored. To bridge the gap, this study aims to investigate the characteristics of developing feedback literacy among postgraduates in an academic writing class. Based on the theoretical framework proposed by </span><span>Yu and Liu (2021)</span>, the study designed two iterations and drew data from a survey, students’ multi-draft writing, peer feedback, students’ reflective journals, and discussion scenarios in dialogues. It was found students developed a dynamic view of feedback, enriched strategies to solve cognitive conflicts, and became emotionally resilient to feedback at the end of iteration two. The study also manifested dialogues helped to develop student feedback literacy and solve cognitive conflicts by clarifying misconceptions, formulating revising actions and co-constructing new ideas on writing and assessment. This study has pedagogical implications on designing feedback processes to facilitate student feedback literacy in academic writing classes.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":null,"pages":null},"PeriodicalIF":3.9,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49818278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}