Pub Date: 2021-04-16 | DOI: 10.1080/08993408.2021.1914461
Title: Computerized adaptive assessment of understanding of programming concepts in primary school children
Authors: S. Hogenboom, F. Hermans, H.L.J. van der Maas
Computer Science Education, vol. 32, pp. 418–448

ABSTRACT Background and Context: Valid assessment of primary school children's understanding of programming concepts is essential to implementing and improving programming education. Objective: We developed and validated the Computerized Adaptive Programming Concepts Test (CAPCT) with a novel application of Item Response Theory. The CAPCT is a web-based, resource-efficient adaptive assessment of 4,489 questions measuring the understanding of basic sequences, loops, conditions (if and if-else statements), debugging, multiple agents, and procedures, as well as the ability to generalize to a new syntax. Method: Data were collected through Math Garden, an existing online adaptive practice and monitoring system. We collected 14 million responses from 93,341 Dutch children (ages 4–13). Findings: The CAPCT demonstrated good psychometric qualities: 75% of the variance in question difficulty was explained by differences in item characteristics. The CAPCT was also robust to the addition of new participants and new items. Differences in player ability (i.e., understanding of CS concepts) were associated with age, gender, the number of items played, and prior mathematical ability. Implications: Teachers may use the CAPCT to identify their pupils' level of programming concept understanding, while researchers may use it to construct and validate effective teaching resources.
Pub Date: 2021-04-03 | DOI: 10.1080/08993408.2021.1918380
Title: Assessing computational thinking: an overview of the field
Authors: David Weintrop, Daisy W. Rutstein, M. Bienkowski, S. McGee
Computer Science Education, vol. 31, pp. 113–116

The last decade has seen rapid growth in the presence of computational thinking (CT) in educational contexts. Those working to advance CT argue that the concepts and skills associated with CT are essential to succeed in an increasingly computational world. As a result of these efforts, CT has a growing presence in K-12 classrooms and beyond. This can be seen in the inclusion of CT in disciplinary standards (e.g., the Next Generation Science Standards and Common Core Math identifying CT as a core practice) as well as in national curricular efforts (e.g., the United Kingdom's national computing curriculum seeks to have students "develop and apply their analytic, problem-solving, design, and computational thinking skills"). Just as CT has a growing presence in formal education, it can also be seen in informal contexts through the growth of computing camps, after-school and library CT programming, and a growing array of toys designed to engage youth with CT.

The contemporary discussion around CT began with Wing's (2006) article, where she argued "to reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability" (p. 33). However, the conceptual origins have a much longer history, dating back to early work on the Logo programming language and Papert's (1980) insights on the potential of computing as a powerful context for learning. In response to Wing's article, much effort has been dedicated to defining what constitutes CT and where the boundaries of the construct lie. While the community has yet to settle on a single unified definition, there is general consensus that CT includes foundational computing concepts such as abstraction and algorithms, as well as computing practices such as problem decomposition and debugging (Grover & Pea, 2013; Shute et al., 2017).

As the dust started to settle from early debates around the scope and nature of CT, a growing number of research projects sought to design CT learning experiences. Spurred in part by an increase in funding for educational projects at the intersection of computing and other disciplines, a space in which CT is particularly well suited to contribute, the last decade has seen tremendous growth in curricula, learning environments, and innovations around CT education (Tang et al., 2020). In the wake of this growth, this special issue seeks to respond to a question of growing importance: How do we assess computational thinking?

This is not a straightforward question to answer, as several aspects of CT make it challenging to assess. For example, there is a wide variety of methods by which CT is taught and of contexts in which students learn CT. While some schools offer stand-alone CT learning experiences, other schools may try to integrate CT within current subject matters. Further, as discussed above, CT is a relatively ill-defined construct; thus, different assessments may focus on slightly different dimensions of CT. Collectively, this produces a landscape that calls for a variety of assessments reflecting the different conceptions of, contexts for, and motivations behind the teaching of CT.
Pub Date: 2021-04-03 | DOI: 10.1080/08993408.2021.1903248
Title: How do students develop computational thinking? Assessing early programmers in a maze-based online game
Authors: M. Guenaga, A. Eguíluz, P. Garaizar, J. Gibaja
Computer Science Education, vol. 31, pp. 259–289

ABSTRACT Background and Context: Despite many initiatives to develop Computational Thinking (CT), not much is known about how early programmers develop CT and how we can assess their learning. Objective: To determine whether analyzing students' interactions with an online platform allows us to understand the development of CT, how the collected data can be converted into valuable insights, and which aspects should be considered in platform design. Method: We developed an online platform with a fine-grained log-recording system and analysed the data collected from 1,004 students (ages 8–14) to understand the difficulties they face. We explain the platform, the tools used to process and filter the interaction logs, and the additional indicators we calculate to provide useful information about students' behaviour. Findings: Age and gender were shown to influence CT learning, and generating additional indicators from basic interaction data provided valuable insights. We provide a list of recommendations for developing more effective programming learning platforms.
Pub Date: 2021-02-26 | DOI: 10.1080/08993408.2021.1874221
Title: Design and validation of learning trajectory-based assessments for computational thinking in upper elementary grades
Authors: Brian D. Gane, Maya Israel, Noor Elagha, Wei Yan, Feiya Luo, J. Pellegrino
Computer Science Education, vol. 31, pp. 141–168

ABSTRACT Background & Context: We describe the rationale, design, and initial validation of computational thinking (CT) assessments paired with curricular lessons that integrate fractions and CT. Objective: We used cognitive models of CT (learning trajectories; LTs) to design assessments and obtained evidence to support a validity argument. Method: We used the LTs and evidence-centered design to develop assessments that 144 Grade 3 and Grade 4 elementary students completed following the integrated instruction. We analyzed the data using multiple psychometric approaches. Findings: The design approach and data analysis suggest that the assessments are well suited to evaluating students' CT knowledge, skills, and abilities across multiple LTs. Implications: We show how to use LTs to design assessments that can yield valid inferences about students' CT competencies; these methods can be adopted and extended by others to create additional assessments that advance computer science education.
Pub Date: 2021-02-03 | DOI: 10.1080/08993408.2021.1877987
Title: Two-computer pair programming: exploring a feedback intervention to improve collaborative talk in elementary students
Authors: Zarifa Zakaria, Jessica Vandenberg, Jennifer L. Tsan, D. Boulden, Collin Lynch, K. Boyer, E. Wiebe
Computer Science Education, vol. 32, pp. 3–29

ABSTRACT Background and Context: Researchers and practitioners have begun to incorporate collaboration in programming because of its reported instructional and professional benefits. However, younger students need guidance on how to collaborate in environments that require substantial interpersonal interaction and negotiation. Previous research indicates that feedback fosters students' productive collaboration. Objective: This study employs an intervention to explore the role instructor-directed feedback plays in elementary students' dyadic collaboration during two-computer pair programming. Method: We used a multi-study design, collecting video data on students' dyadic collaboration. Study 1 qualitatively explored dyadic collaboration by coding the video transcripts of four dyads, which guided the design of Study 2, which examined the conversation of six dyads using MANOVA and non-parametric tests. Findings: Results from Study 2 showed that, in the sample considered, students receiving feedback used productive conversation categories significantly more than those in the control condition. Results are discussed in terms of group differences in specific conversation categories. Implications: Our study highlights ways to support students in pair programming contexts so that they can maximize the benefits afforded by these experiences.
Pub Date: 2021-02-02 | DOI: 10.1080/08993408.2021.1874228
Title: Teaching in an open village: a case study on culturally responsive computing in compulsory education
Authors: Michael Lachney, A. Bennett, R. Eglash, Aman Yadav, S. Moudgalya
Computer Science Education, vol. 31, pp. 462–488

ABSTRACT Background: As teachers work to broaden the participation of racially and ethnically underrepresented groups in computer science (CS), culturally responsive computing (CRC) becomes more pertinent to formal settings. Objective: Equity-oriented literature offers limited guidance for developing deep forms of CRC in the classroom. In response, we support the claim that "it takes a village" to develop equity-oriented CS education, and we additionally highlight the roles of cultural experts in the process. Methods: We use a case study methodology to explore one instance of this: a collaboration between a multi-racial team of researchers, a Black cosmetologist, and a White technology teacher. Findings: Three themes supported the CRC collaboration: multi-directional relationship building, iterative engagement with culture-computing, and collaborative implementation of a hybrid lesson. Implications: As opposed to orienting broadening participation around extractive metaphors like "pipelines," our case study constructs the metaphor of an "open village" to orient CS education toward collaborations between schools and the communities they serve.
Pub Date: 2021-02-01 | DOI: 10.1080/08993408.2021.1877988
Title: Developing a kindergarten computational thinking assessment using evidence-centered design: the case of algorithmic thinking
Authors: Jody Clarke, D. Silvis, Jessica F. Shumway, Victor R. Lee, Joseph S. Kozlowski
Computer Science Education, vol. 31, pp. 117–140

ABSTRACT Background and Context: There is a need for early childhood assessments of computational thinking (CT). However, there is no consensus on a guiding framework, definition, or set of proxies with which to measure CT. We address this problem by using Evidence-Centered Design (ECD) to develop an assessment of kindergarten-aged children's CT. Objective: To present a design case on the development of the assessment, specifically the algorithmic thinking (AT) tasks, and to share the validity evidence that emerged. Method: We focus on the AT sub-component of CT and present the principled assessment design process using ECD. Findings: Our operationalization of CT includes spatial reasoning as a sub-component. Pilot results showed acceptable internal consistency reliability for the AT items, along with critical design decisions that contributed to validity evidence. Implications: An important contribution of this work is the inclusion of spatial reasoning in our definition of early childhood CT.
Pub Date: 2021-01-31 | DOI: 10.1080/08993408.2020.1860408
Title: Towards understanding the effective design of automated formative feedback for programming assignments
Authors: Qiang Hao, David H. Smith IV, Lu Ding, Amy J. Ko, Camille Ottaway, Jack P. Wilson, Kai Arakawa, A. Turcan, Timothy Poehlman, Tyler Greer
Computer Science Education, vol. 32, pp. 105–127

ABSTRACT Background and Context: Automated feedback for programming assignments has great potential for promoting just-in-time learning, but there has been little work investigating the design of feedback in this context. Objective: To investigate, at a fine-grained level, the impact of different designs of automated feedback on student learning, and how students interacted with and perceived the feedback. Method: A controlled quasi-experiment with 76 CS students, in which each group received a different combination of three types of automated feedback for their programming assignments. Findings: Feedback addressing the gap between expected and actual outputs is critical to effective learning; feedback lacking sufficient detail may lead to system-gaming behaviors. Implications: The design of feedback has a substantial impact on the efficacy of automated feedback for programming assignments; more research is needed to extend what is known about effective feedback design in this context.
Pub Date: 2021-01-18 | DOI: 10.1080/08993408.2021.1874229
Title: Assessing computational thinking in libraries
Authors: David Weintrop, Shandra Morehouse, Mega M. Subramaniam
Computer Science Education, vol. 31, pp. 290–311

ABSTRACT Background and Context: Computational thinking (CT) is an essential skill for all youth to succeed in our technology- and information-rich world. While CT has a growing presence within K-12 classrooms, libraries play an essential role in introducing these critical skills to all. Objective: Assessing learning in libraries is difficult given their informal nature; this is especially true when trying to assess a new and ill-defined construct like CT. A first step towards better supporting informal educators is to identify the motivations for bringing CT into informal spaces and to understand the current state of assessing CT in public libraries. Method: Thirty-seven interviews with library staff from across the United States were conducted and analyzed. Findings: This work reveals the variety of motivations justifying the inclusion of CT programming for youth in libraries and the challenges associated with assessing CT in libraries, and it identifies the assessments library staff would like to be able to conduct for their own CT-related programming. Implications: This work advances our understanding of the current state of CT assessment in public libraries and lays the groundwork for future work seeking to meet the needs of those tasked with bringing CT to youth beyond the classroom.
Pub Date: 2021-01-14 | DOI: 10.1080/08993408.2020.1866933
Title: Communicating about computational thinking: understanding affordances of portfolios for assessing high school students' computational thinking and participation practices
Authors: D. Fields, Debora Lui, Y. Kafai, Gayithri Jayathirtha, Justice T. Walker, Mia S. Shaw
Computer Science Education, vol. 31, pp. 224–258

ABSTRACT Background and Context: While the assessment of computational thinking concepts, practices, and perspectives is at the forefront of K-12 CS education, supporting student communication about computation has received relatively little attention. Objective: To examine the usability of process-based portfolios for capturing students' communication about their computational practices during the process of making electronic textile projects. Method: We examined the portfolios of 248 high school students in 15 introductory CS classrooms from largely underserved communities, using a formal rubric (top-down) to code computational communication and an open-coding scheme (bottom-up) to identify the computational practices described. Findings: Students demonstrated stronger abilities to communicate about computation using text than using visuals. They also reported under-assessed CT practices such as debugging, iterating, and collaborating. Students of experienced e-textile teachers performed substantially better than those of novice e-textile teachers. Implications: Portfolios provide a viable addition to traditional performance or survey assessments and meet a need to promote communication skills.