
Latest publications in Assessment in Education-Principles Policy & Practice

Complementary strengths? Evaluation of a hybrid human-machine scoring approach for a test of oral academic English
IF 3.2 | CAS Tier 3 (Education) | Q1 EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2021-07-04 | DOI: 10.1080/0969594X.2021.1979466 | Pages: 437-455
Larry Davis, S. Papageorgiou
ABSTRACT Human raters and machine scoring systems potentially have complementary strengths in evaluating language ability; specifically, it has been suggested that automated systems might be used to make consistent measurements of specific linguistic phenomena, whilst humans evaluate more global aspects of performance. We report on an empirical study that explored the possibility of combining human and machine scores using responses from the speaking section of the TOEFL iBT® test. Human raters awarded scores for three sub-constructs: delivery, language use and topic development. The SpeechRaterSM automated scoring system produced scores for delivery and language use. Composite scores computed from three different combinations of human and automated analytic scores were as reliable as, or more reliable than, human holistic scores, probably due to the inclusion of multiple observations in composite scores. However, composite scores calculated solely from human analytic scores showed the highest reliability, and reliability steadily decreased as more machine scores replaced human scores.
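The reliability mechanics described in this abstract are straightforward to illustrate: a composite is a (possibly weighted) mean of analytic scores, and the gain from pooling multiple observations follows the Spearman-Brown prophecy formula. The Python sketch below is illustrative only; the score matrix, weights, and single-score reliability value are invented stand-ins, not the study's data or the TOEFL iBT/SpeechRater scoring model.

    import numpy as np

    def spearman_brown(rho_single, k):
        # Predicted reliability of a composite of k parallel observations,
        # each with single-observation reliability rho_single.
        return k * rho_single / (1 + (k - 1) * rho_single)

    def cronbach_alpha(scores):
        # Internal-consistency reliability of a composite;
        # scores is an (examinees x components) array of analytic scores.
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    # Invented analytic scores (1-4 scale) for five examinees; columns stand
    # in for the three human-scored sub-constructs: delivery, language use,
    # and topic development.
    analytic = np.array([
        [3, 3, 2],
        [4, 4, 4],
        [2, 3, 2],
        [1, 2, 1],
        [3, 4, 3],
    ], dtype=float)

    # A composite is a weighted mean of analytic scores; swapping a human
    # column for a machine-scored one changes the mix, not the formula.
    weights = np.array([1 / 3, 1 / 3, 1 / 3])
    composite = analytic @ weights

    print("composite scores:", np.round(composite, 2))
    print("alpha of three analytic scores:", round(cronbach_alpha(analytic), 2))
    # Why multiple observations help: predicted reliability rises with k.
    print([round(spearman_brown(0.6, k), 2) for k in (1, 2, 3)])  # 0.6, 0.75, 0.82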
Citations: 4
Use of innovative technology in oral language assessment
IF 3.2 | CAS Tier 3 (Education) | Q1 EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2021-07-04 | DOI: 10.1080/0969594X.2021.2004530 | Pages: 343-349
Fumiyo Nakatsuhara, Vivien Berry
The theme of the very first Special Issue of Assessment in Education: Principles, Policy and Practice (Volume 10, Issue 3, published in 2003) was 'Assessment for the Digital Age'. The editorial of that Special Issue notes that the aim of the volume was to 'draw the attention of the international assessment community to a range of potential and actual relationships between digital technologies and assessment' (McFarlane, 2003, p. 261). Since then, there is no doubt that the role of digital technologies in assessment has evolved even more dynamically than any assessment researchers and practitioners had expected. In particular, exponential advances in technology and the increased availability of high-speed internet in recent years have not only changed the way we communicate orally in social, professional, and educational contexts, but also the ways in which we assess oral language.

Revisiting the same theme after almost two decades, but specifically from an oral language assessment perspective, this Special Issue presents conceptual and empirical papers that discuss the opportunities and challenges that the latest innovative affordances offer. The current landscape of oral language assessment can be characterised by numerous examples of the development and use of digital technology (Sawaki, 2022; Xi, 2022). While these innovations have opened the door to types of speaking test tasks which were previously not possible and have provided language test practitioners with more efficient ways of delivering and scoring tests, it should be kept in mind that 'each of the affordances offered by technology also raises a new set of issues to be tackled' (Chapelle, 2018). This does not mean that we should be excessively concerned or sceptical about technology-mediated assessments; it simply means that greater transparency is needed. Up-to-date information and appropriate guidance about the use of innovative technology in language testing and, more importantly, what language skills are elicited from test-takers and how they are measured, should be available to test users so that they can both embrace and critically engage with the fast-moving developments in the field (see also Khabbazbashi et al., 2021; Litman et al., 2018).

This Special Issue therefore aims to contribute to and encourage transparent dialogue among test researchers, practitioners, and users within the international testing community on recent research that investigates both methods of delivery and methods of scoring in technology-mediated oral language assessments.

Of the seven articles in this volume, the first three are on the application of technologies for speaking test delivery. In the opening article, Ockey and Neiriz offer a conceptual paper examining five models of technology-delivered assessments of oral communication that have been utilised over the past three decades. Drawing on Bachman and Palmer's (1996) qualities of test usefulness, Ockey and Hirch's (2020) assessment of English as a lingua franca (ELF) framework, and Harding and McNamara's (2018) research on ELF and its relationship to the constructs of language assessment, Ockey and Neiriz propose a framework for evaluating technology-mediated oral communication assessment delivery models.
Citations: 2
Assessing L2 English speaking using automated scoring technology: examining automarker reliability
IF 3.2 | CAS Tier 3 (Education) | Q1 EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2021-07-04 | DOI: 10.1080/0969594X.2021.1979467 | Pages: 411-436
Jing Xu, Edmund Jones, V. Laxton, E. Galaczi
ABSTRACT Recent advances in machine learning have made automated scoring of learner speech widespread, and yet validation research that provides support for applying automated scoring technology to assessment is still in its infancy. Both the educational measurement and language assessment communities have called for greater transparency in describing scoring algorithms and research evidence about the reliability of automated scoring. This paper reports on a study that investigated the reliability of an automarker using candidate responses produced in an online oral English test. Based on ‘limits of agreement’ and multi-faceted Rasch analyses on automarker scores and individual examiner scores, the study found that the automarker, while exhibiting excellent internal consistency, was slightly more lenient than examiner fair average scores, particularly for low-proficiency speakers. Additionally, it was found that an automarker uncertainty measure termed Language Quality, which indicates the confidence of speech recognition, was useful for predicting automarker reliability and flagging abnormal speech.
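'Limits of agreement' here refers to the Bland-Altman method: the mean automarker-examiner difference plus or minus 1.96 standard deviations of those differences. A minimal sketch follows, assuming invented paired scores and a made-up Language Quality cut-off; the real system's score scale and thresholds are not given in the abstract.

    import numpy as np

    def limits_of_agreement(auto, examiner):
        # Bland-Altman limits: mean difference +/- 1.96 SD of the differences.
        # A positive mean difference means the automarker is more lenient.
        diff = np.asarray(auto) - np.asarray(examiner)
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    # Invented paired scores on a common scale (not the study's data).
    auto_scores = [4.2, 3.1, 5.0, 2.8, 3.9, 4.6]
    fair_average = [4.0, 2.7, 5.1, 2.2, 3.8, 4.4]

    bias, (low, high) = limits_of_agreement(auto_scores, fair_average)
    print(f"bias = {bias:+.2f}, 95% limits of agreement = ({low:.2f}, {high:.2f})")

    # The abstract's Language Quality measure reflects speech-recognition
    # confidence; a low value can flag a response for human marking.
    # The 0.6 threshold below is a hypothetical illustration.
    language_quality = np.array([0.91, 0.88, 0.95, 0.42, 0.87, 0.90])
    print("flag for human review:", np.where(language_quality < 0.6)[0])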
Citations: 3
Evaluating technology-mediated second language oral communication assessment delivery models
IF 3.2 | CAS Tier 3 (Education) | Q1 EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2021-07-04 | DOI: 10.1080/0969594X.2021.1976106 | Pages: 350-368
G. Ockey, Reza Neiriz
ABSTRACT As our understanding of the construct of oral communication (OC) has evolved, so have the possibilities of computer technology undertaking the delivery of tests that measure this ability. It is paramount to understand to what extent such developments lead to accurate, comprehensive, and useful assessment of OC. In this paper, we discuss five models of technology-delivered OC assessment that have appeared in the past three decades. We evaluate these models in terms of how well their respective methods aid in assessing OC. To achieve this aim, we use a framework which takes into account a contemporary view of OC ability, including the call for incorporating English as a lingua franca (ELF) considerations into English language assessment. The evaluation of the five models suggests strengths and weaknesses of each that should be considered when determining which is used for a particular purpose.
Citations: 3
Teacher use of digital technologies for school-based assessment: a scoping review
IF 3.2 | CAS Tier 3 (Education) | Q1 EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2021-05-04 | DOI: 10.1080/0969594X.2021.1929828 | Pages: 279-300
Christopher N. Blundell
ABSTRACT This paper presents a scoping review of, firstly, how teachers use digital technologies for school-based assessment and, secondly, how these assessment-purposed digital technologies are used in teacher- and student-centred pedagogies. It draws on research about the use of assessment-purposed digital technologies in school settings, published from 2009 to 2019 in peer-reviewed journals and conference proceedings. The findings indicate that automated marking and computer- and web-based assessment technologies support established school-based assessment practices, and that game-based and virtual/augmented environments and ePortfolios diversify the modes of assessment and the evidence of learning collected. These technologies improve the efficiency of assessment practices in teacher-centred pedagogies and provide latitude to assess evidence of learning from more diverse modes of engagement in student-centred pedagogies. Current research commonly focuses on validating specific technologies and most commonly relates to automated assessment of closed outcomes within a narrow range of learning areas; these limits indicate opportunities for future research.
Citations: 3
Conceptualising a Fairness Framework for Assessment Adjusted Practices for Students with Disability: An Empirical Study
IF 3.2 | CAS Tier 3 (Education) | Q1 EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2021-05-04 | DOI: 10.1080/0969594X.2021.1932736 | Pages: 301-321
A. Rasooli, Maryam Razmjoee, J. Cumming, E. Dickson, A. Webster
ABSTRACT Given the increasing diversity of teachers and students in 21st century classrooms, fairness is a key consideration in classroom adjusted assessment and instructional practices for students with disability. Despite its significance, little research has attempted to explicitly conceptualise fairness for classroom assessment adjusted practices. The purpose of this study is to leverage the multiple perspectives of secondary school students with disability, their teachers, and parents to build a multi-dimensional framework of fairness for assessment adjusted practices. Open-ended survey data were collected from 60 students with disability, 45 teachers, and 58 parents in four states in Australia and were analyzed using qualitative inductive analysis. The findings present a multidimensional framework for assessment adjusted practices that include interactions across elements of assessment practices, socio-emotional environment, overall conceptions of fairness, and contextual barriers and facilitators. The interactions across these elements influence the learning opportunities and academic outcomes for students with disability.
Citations: 6
Who is feedback for? The influence of accountability and quality assurance agendas on the enactment of feedback processes
IF 3.2 | CAS Tier 3 (Education) | Q1 EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2021-05-04 | DOI: 10.1080/0969594X.2021.1926221 | Pages: 261-278
N. Winstone, D. Carless
ABSTRACT In education systems across the world, teachers are under increasing quality assurance scrutiny in relation to the provision of feedback comments to students. This is particularly pertinent in higher education, where accountability arising from student dissatisfaction with feedback causes concern for institutions. Through semi-structured interviews with twenty-eight educators from a range of institution types, we investigated how educators perceive, interpret, and enact competing functions of feedback. The data demonstrate that educators often experienced professional dissonance where perceived quality assurance requirements conflicted with their own beliefs about the centrality of student learning in feedback processes. Such dissonance arose from the pressure to secure student satisfaction, and avoid complaints. The data also demonstrate that feedback does ‘double duty’ through the requirement to manage competing audiences for feedback comments. Quality enhancement of feedback processes could profitably focus less on teacher inputs and more on evidence of student response to feedback.
Citations: 10
Who is feedback for?
IF 3.2 | CAS Tier 3 (Education) | Q1 EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2021-05-04 | DOI: 10.1080/0969594X.2021.1975996 | Pages: 209-211
Therese N. Hopfenbeck
The articles in this regular issue look at different forms of assessment practices, such as grading and feedback, and how stakeholders interact with the outcomes of these practices. The first article presents a research study from Sweden on holistic and analytic grading. As grades are the main criteria for selecting schools for higher education, and they are based upon teachers' judgement, grading is rather high stakes for students in Sweden. Johnson et al. (this issue) set up an experimental study in which Swedish teachers were randomly assigned to two different conditions (i.e. analytic or holistic grading), in either English as a foreign language (EFL) or mathematics. The study was conducted online, with only grades and written justifications from the teachers collected by the research team. In the analytic condition, teachers received authentic student responses from four students four times, and were asked to grade these through an Internet-based form. At the end of the semester, teachers were asked to provide an overall grade. In the holistic condition, teachers received all material at one time, and would therefore not be influenced by previous experiences. Findings indicate that analytic grading was preferable to holistic grading in terms of agreement among teachers, with stronger effects found in EFL. Teachers in the analytic condition made more references to grade levels without specifying criteria, while teachers in the holistic condition provided more references to criteria in their justifications. Although the participants volunteered for the experiment and it was relatively small, the study offers important empirical results in an area where there are still more questions than solutions. The authors propose further investigation into how to increase agreement between teachers' grading, including moderation procedures in which teachers review each other's grading.

In the second article, Yan et al. (this issue) present a systematic review of factors influencing teachers' intentions and implementations regarding formative assessment. The 52 studies included in the qualitative synthesis discuss issues such as how teachers' self-efficacy, education, and training influence their intention to conduct formative assessment, and add to previous reviews on the implementation of formative assessment. More specifically, the review demonstrates that not only contextual but also personal factors need to be taken into consideration when designing school-based support measures or teacher professional development programmes that aim to promote formative assessment practices.

In the article 'Who is feedback for? The influence of accountability and quality assurance agendas on the enactment of feedback processes', Winstone and Carless (this issue) explore the consequences of evaluation and accountability measures in higher education in the UK, and how these influence and interact with feedback processes from teachers to students. The study is of importance because less is known about the unintended consequences of current accountability measures.
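Agreement among teachers of the kind compared in the Johnson et al. study is commonly quantified with exact-agreement rates and chance-corrected indices such as weighted kappa. A brief sketch with invented grades follows; the data are hypothetical, not the study's, and scikit-learn's cohen_kappa_score supplies the chance correction.

    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    # Invented grades (Swedish A-F scale) awarded by two teachers to the
    # same ten student responses.
    teacher_1 = ["C", "B", "E", "A", "D", "C", "F", "B", "C", "E"]
    teacher_2 = ["C", "B", "D", "A", "D", "B", "F", "B", "C", "E"]

    # Exact agreement: share of responses given the same grade.
    exact = np.mean([g1 == g2 for g1, g2 in zip(teacher_1, teacher_2)])

    # Quadratically weighted kappa credits near-misses on the ordinal scale,
    # so grades are first mapped to ranks 0..5.
    rank = {g: i for i, g in enumerate("ABCDEF")}
    kappa = cohen_kappa_score(
        [rank[g] for g in teacher_1],
        [rank[g] for g in teacher_2],
        weights="quadratic",
    )
    print(f"exact agreement: {exact:.0%}, weighted kappa: {kappa:.2f}")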
Citations: 0
The uses and misuses of centralised high stakes examinations-Assessment Policy and Practice in Georgia
IF 3.2 | CAS Tier 3 (Education) | Q1 EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2021-04-04 | DOI: 10.1080/0969594X.2021.1900775 | Pages: 322-342
Sophia Gorgodze, Lela Chakhaia
ABSTRACT Trust in centralised high-stakes exams in Georgia has grown since 2005, when the introduction of nationwide standardised tests for university entry successfully eradicated the deep-rooted corruption in the admissions system. In 2011, another set of high-stakes exams was introduced for school graduation, resulting in a minimum of 12 exams for secondary school graduation and university entry. The examination system reform in 2019 was limited to abolishing the school graduation exams and reducing the number of university admission exams. Fewer exams instigated fears among some stakeholders of decreased student motivation and deteriorating learning outcomes. This article describes how centralised high-stakes assessments have become an integral part of the education system, cites available evidence on their impact, accounts for recent changes, and argues that overreliance on centralised high-stakes exams is due to complex educational, political and social processes that make it difficult to transform the system.
Citations: 3
Signature assessment and feedback practices in the disciplines
IF 3.2 | CAS Tier 3 (Education) | Q1 EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2021-03-04 | DOI: 10.1080/0969594X.2021.1930444 | Pages: 97-100
Edd Pitt, Kathleen M. Quinlan
In the main, attention to disciplinary practices has been neglected in assessment and feedback research (Coffey et al., 2011; Cowie & Moreland, 2015). Only recently, the longstanding interest in authentic assessment (e.g. Wiggins, 1989) has re-surfaced in higher education literature on authentic assessment design (Ashford-Rowe et al., 2014; Villarroel et al., 2018) and authentic feedback (Dawson et al., 2020).

To address this gap, in our 2019 call for papers for this special issue, we sought articles that would explore the potential of what we called 'signature' assessment and feedback practices. Just as signature pedagogies (Shulman, 2005) have directed attention to discipline- and profession-specific teaching practices in higher education, we used the term 'signature' to invite researchers and educators to consider discipline-specific assessment and feedback practices. While these signatures will be authentic to a discipline, the term implies that they will be uniquely characteristic of a particular discipline. Thus, we invited researchers and educators to dig deeply into what makes a discipline or profession special and distinct from other fields. Because attention to disciplines has the potential to connect primary and secondary with tertiary education, which is often siloed in its own journals, the call for papers also explicitly sought examples from different levels of education.

Two years later, this special issue contains five theoretically framed and grounded empirical papers that: a) situate particular assessment and feedback practices within a discipline; b) analyse how engagement with those assessment and feedback activities allows students to participate more fully or effectively within the disciplinary or professional community, and c) illuminate new aspects of assessment and feedback. We (Quinlan and Pitt, this issue) conclude this special issue with an article that draws on the five empirical papers to construct a taxonomy for advancing research on signature assessment and feedback practices.
Citations: 2