Stealth assessment is a learning analytics method that leverages the collection and analysis of learners' interaction data to make real-time inferences about their learning. Employed in digital learning environments, stealth assessment helps researchers and educators evaluate learners' competencies and customize the learning experience to their specific needs. This adaptability is closely intertwined with theories of learning, engagement, and motivation. The foundation of stealth assessment rests on evidence-centered design (ECD), which consists of four core models: the Competency Model (CM), Evidence Model, Task Model, and Assembly Model.
The first step in designing a stealth assessment is producing operational definitions of the constructs to be assessed. The CM establishes a framework of latent variables representing the target constructs, as well as their interrelations. When developing the CM, assessment designers must produce clear descriptions of the claims associated with the latent variables and their states, and sketch out how the competencies can be measured using assessment tasks. As the designers elaborate on the other assessment models, the CM definitions need to be revisited to ensure they remain consistent with the scope and constraints of the assessment. Although CM development is only the first step, problems at this stage can propagate, resulting in an assessment that does not meet its intended purpose. The objective of this paper is to elucidate the necessary steps for CM development and to highlight potential challenges in the process, along with strategies for addressing them, particularly for designers without much formal assessment experience.
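To make the link between CM latent variables and task-based evidence concrete, the following sketch (our own illustration, not drawn from the paper) shows the kind of inference an Evidence Model might perform over a single CM variable: a binary latent competency whose mastery probability is updated from correct/incorrect task outcomes via Bayes' rule. All probability values here are hypothetical placeholders.

```python
def update_competency(prior, observations,
                      p_correct_given_mastery=0.8,
                      p_correct_given_no_mastery=0.3):
    """Return P(mastery | observations) after sequential Bayesian updates.

    prior: initial P(mastery), e.g. 0.5 for a neutral starting belief.
    observations: iterable of booleans (True = task solved correctly).
    The two conditional probabilities are illustrative, not calibrated.
    """
    belief = prior
    for correct in observations:
        # Likelihood of the observed outcome under each competency state.
        like_mastery = p_correct_given_mastery if correct else 1 - p_correct_given_mastery
        like_none = p_correct_given_no_mastery if correct else 1 - p_correct_given_no_mastery
        # Bayes' rule: posterior is proportional to likelihood times prior.
        belief = (like_mastery * belief) / (
            like_mastery * belief + like_none * (1 - belief))
    return belief

# Three correct responses raise a neutral prior toward mastery.
print(round(update_competency(0.5, [True, True, True]), 3))
```

A full stealth assessment would typically use a multi-variable statistical model (e.g., a Bayesian network over the whole CM) rather than a single node, but the principle of accumulating evidence from interaction data is the same.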
This paper is a methodological exposition, showcasing five examples of CM development. Specifically, we conducted a qualitative retrospective analysis of the CM development procedure, wherein participants unfamiliar with ECD applied the framework and documented their work. In a stealth assessment course, four groups of students (novice stealth assessment designers) developed stealth assessments for challenging-to-measure constructs across four distinct projects. During their CM development process, we observed various activities to pinpoint areas of difficulty.
This paper presents five illustrative examples: one CM for assessing physics understanding and four CMs for complex competencies: (1) systems thinking, (2) online information credibility evaluation, (3) computational thinking, and (4) collaborative creativity. Each example represents a case in CM development, offering valuable insights.
The paper concludes by discussing several guidelines derived from these examples. It emphasizes that dedicating ample time to fine-tuning CMs can significantly enhance the accuracy of assessments of learners' knowledge and skills, and it underscores the significance of the qualitative phases of crafting comprehensive stealth assessments, such as CM development, alongside the quantitative statistical modeling and technical aspects of these assessments.