{"title":"Design tensions in developing and using observation and assessment tools in makerspaces","authors":"Vishesh Kumar, Peter Wardrip, Rebecca Millerjohn","doi":"10.1007/s11423-023-10330-0","DOIUrl":null,"url":null,"abstract":"<p>Makerspaces, especially in their diverse proliferating forms, support a broad variety of learning outcomes. There is rich work in attempting to understand and describe these learning goals. Yet, there is a lack of support for practitioners and educators to assess the learning in events and programming at makerspaces (and similar environments) without extensive videorecording and documentation. In this paper, we present our design iterations at adapting the Tinkering Studio’s Learning Dimensions Framework (LDF) into tools usable by makerspace facilitators. These tools are intended to support recording observations, to inform the design of events they organize. Coupling an activity theory perspective (Cole and Engeström in The Cambridge handbook of sociocultural psychology. Cambridge University Press, Cambridge, 2007) with Tatar’s (2007) Design Tensions framework, we highlight key categories of considerations that emerge in creating and implementing such an assessment system, namely, tools, terminology, and practice. These interlinked categories foreground the following tensions which expand our considerations for the practice of assessment in makerspaces: supporting real-time, informative observation increases granularity of data collected, but also imposes a cost on facilitator attention; using a common assessment framework across different facilitators requires developing and establishing shared vocabulary and understanding; and tool-driven assessments need repeated adaptation and responsiveness to different facilitator practices. Additionally, this analysis also surfaces the learning for facilitators themselves in such a co-design process of creating and implementing tools to understand, recognize and assess learning experiences through the lenses of personal and shared values around productive learning.</p>","PeriodicalId":501584,"journal":{"name":"Educational Technology Research and Development","volume":"53 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Educational Technology Research and Development","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s11423-023-10330-0","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Makerspaces, especially in their diverse, proliferating forms, support a broad variety of learning outcomes. There is rich work attempting to understand and describe these learning goals. Yet practitioners and educators lack support for assessing the learning that occurs in events and programming at makerspaces (and similar environments) without extensive video recording and documentation. In this paper, we present our design iterations in adapting the Tinkering Studio’s Learning Dimensions Framework (LDF) into tools usable by makerspace facilitators. These tools are intended to support facilitators in recording observations that inform the design of the events they organize. Coupling an activity theory perspective (Cole & Engeström, 2007) with Tatar’s (2007) Design Tensions framework, we highlight key categories of considerations that emerge in creating and implementing such an assessment system, namely tools, terminology, and practice. These interlinked categories foreground the following tensions, which expand our considerations for the practice of assessment in makerspaces: supporting real-time, informative observation increases the granularity of data collected but also imposes a cost on facilitator attention; using a common assessment framework across different facilitators requires developing and establishing shared vocabulary and understanding; and tool-driven assessments need repeated adaptation and responsiveness to different facilitator practices. Additionally, this analysis surfaces the learning that facilitators themselves undergo in such a co-design process of creating and implementing tools to understand, recognize, and assess learning experiences through the lenses of personal and shared values around productive learning.