X. Zhai, Y. Yin, J. Pellegrino, Kevin C. Haudek, Lehong Shi
Studies in Science Education (Q1, Education & Educational Research; Impact Factor 4.7), published 2 January 2020. Journal Article. DOI: 10.1080/03057267.2020.1735757. Citations: 90.
Applying machine learning in science assessment: a systematic review
ABSTRACT Machine learning (ML) is an emergent computerised technology whose algorithms are built by 'learning' from training data rather than by explicit 'instruction', and it holds great potential to revolutionise science assessment. This study systematically reviewed 49 articles on ML-based science assessment through a triangle framework with technical, validity, and pedagogical features at its three vertices. We found that a majority of the studies focused on the validity vertex rather than the other two. The existing studies primarily involve text recognition, classification, and scoring, with an emphasis on constructing scientific explanations, and report a wide range of human-machine agreement measures. To estimate agreement, most of the studies employed cross-validation rather than self- or split-validation. ML allows complex assessments to be used by teachers without the burden of human scoring, saving both time and cost. Most studies used supervised ML rather than semi-supervised or unsupervised ML; supervised approaches achieve automaticity by extracting attributes from student work that humans have first coded. We found that 24 studies explicitly embedded the assessments in science learning activities, such as scientific inquiry and argumentation, to provide feedback or learning guidance. This study identifies existing research gaps and suggests that all three vertices of the ML triangle should be addressed in future assessment studies, with an emphasis on the pedagogical and technological features.
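The pipeline the abstract describes, human-coded training responses, supervised classification, cross-validated scoring, and a human-machine agreement measure, can be sketched in plain Python. Everything below (the toy student responses, the word-overlap "classifier", the two-level rubric) is invented for illustration; the reviewed studies used far more sophisticated feature extraction and models.

```python
from collections import Counter

# Toy dataset: student explanations of why ice floats, each scored by a
# human coder. These hand-coded responses play the role of the
# "training data" in the abstract's supervised setting.
texts = [
    "the ice has lower density than water",          # correct
    "density of ice is less than liquid water",      # correct
    "ice is less dense so it floats",                # correct
    "ice is cold and hard",                          # incorrect
    "because ice is frozen solid",                   # incorrect
    "the cold frozen ice gets pushed up",            # incorrect
]
human = ["correct"] * 3 + ["incorrect"] * 3

def cohen_kappa(a, b):
    """Chance-corrected agreement between two lists of labels."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in set(a) | set(b))
    return 1.0 if p_exp == 1 else (p_obs - p_exp) / (1 - p_exp)

def kfold(indices, k):
    """Yield (train, test) index lists for k-fold cross-validation."""
    folds = [indices[i::k] for i in range(k)]
    for i in range(k):
        train = [x for j in range(k) if j != i for x in folds[j]]
        yield train, folds[i]

def train_centroids(docs, labels):
    """'Supervised learning': build one bag-of-words centroid per human label."""
    cents = {}
    for doc, lab in zip(docs, labels):
        cents.setdefault(lab, Counter()).update(doc.split())
    return cents

def predict(cents, doc):
    """Score a new response by word overlap with each label's centroid."""
    words = Counter(doc.split())
    return max(cents, key=lambda lab: sum(min(words[w], cents[lab][w]) for w in words))

# Cross-validated machine scores: each response is scored by a model
# trained only on the other folds, as in most of the reviewed studies.
machine = [None] * len(texts)
for train_idx, test_idx in kfold(list(range(len(texts))), 3):
    cents = train_centroids([texts[i] for i in train_idx],
                            [human[i] for i in train_idx])
    for i in test_idx:
        machine[i] = predict(cents, texts[i])

kappa = cohen_kappa(human, machine)
print(f"human-machine agreement (Cohen's kappa): {kappa:.2f}")
```

On this toy set all six cross-validated machine scores match the human codes, so kappa comes out at 1.0; real response corpora are messier, which is why the reviewed studies report such a wide range of agreement values.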
About the journal:
The central aim of Studies in Science Education is to publish review articles of the highest quality which provide analytical syntheses of research into key topics and issues in science education. In addressing this aim, the Editor and Editorial Advisory Board are guided by a commitment to:
maintaining and developing the highest standards of scholarship associated with the journal;
publishing articles from as wide a range of authors as possible, in relation both to professional background and country of origin;
publishing articles which serve both to consolidate and reflect upon existing fields of study and to promote new areas for research activity.
Studies in Science Education will be of interest to all those involved in science education, including: science education researchers; doctoral and master's students; science teachers at elementary, high school and university levels; science education policy makers; and science education curriculum developers and textbook writers.
Articles featured in Studies in Science Education appear either by invitation from the Editor or through proposals offered by potential contributors. Given the substantial nature of the review articles, the Editor is willing to give informal feedback on the suitability of proposals, though all contributions, whether invited or not, are subject to full peer review. A limited number of books of special interest and concern to those involved in science education are normally reviewed in each volume.