Reconciling Assessment Quality Standards and "Double Assessment" in Competency-Based Higher Education

Mary Tkatchov, Erin Hugus, Richard Barnes
doi: 10.1002/cbe2.1215

High standards for assessment practices are essential in all institutions of learning. The role of assessment is arguably even more significant in competency-based education (CBE) institutions, since credits and degrees are earned solely on the basis of demonstrated mastery of competencies through assessments, and not, as in traditional schooling models, on an average that includes the accumulation of seat time (attendance) and points for activities that do not necessarily indicate competency (e.g., classwork, discussion participation) in addition to assessments.

CBE institutions that claim their graduates are competent in stated competencies have a responsibility to make the quality of competency assessments a high priority in continual institutional improvement, because "in CBE—unlike most traditional programs based on the credit hour—the institution must state with authority that its graduates have demonstrated the learning outcomes required for a degree" (Klein-Collins, 2013, p. 7), and "the value of CBE credentials hinges on the reliability and validity of those assessments" in determining graduates' competence (McClarty & Gaertner, 2015, p. 3).

There are commonly accepted standards and best practices for the assessment of learning that apply to all learning models in general, as well as assessment concepts that may be specific to the CBE model.
One aspect of CBE assessment "best practices," which was evident in assessment policies and anecdotally in conversations with colleagues at various CBE institutions, was the concept of "double assessment."

Similar to how the "double jeopardy" clause in the Fifth Amendment of the US Constitution prevents a criminal defendant from being prosecuted more than once for the same crime, a prohibition against "double assessment" in CBE means that once a student has been assessed and has successfully demonstrated mastery of a competency on an assessment, that student should not be assessed on that competency again. "Double assessment" applies only to successful demonstration of mastery of a competency; it does not prohibit or preclude multiple attempts at an assessment when students fail to demonstrate competence on it. Allowing students multiple attempts to pass a competency assessment is a central tenet of CBE.

In addition, "double assessment" refers only to summative assessment, which is "conducted to help determine whether a student has attained a certain level of competency" (National Research Council, 2001, p. 40) or "to certify, report on, or evaluate learning" (Brookhart, McTighe, Stiggins, & Wiliam, 2019, p. 6). Using multiple types of formative assessment, or informal assessment that is used to monitor student progress and does not equate to a grade or credit, is common in higher education and viewed as best practice.
There is, however, debate over whether using more than one summative assessment to assess students on the same content or learning outcomes is beneficial, or whether it is unnecessary and may even inhibit student learning (Beagley & Capaldi, 2016; Domenech, Blazquez, de la Poza, & Munoz-Miquel, 2015; Lawrence, 2013).

The origin of "double assessment" in CBE is difficult to investigate because virtually no literature exists that defines it and explains what it is and what it is not. Literature about assessment best practice in CBE does not specifically and directly address "double assessment"; however, there is some evidence in the CBE literature from which we can infer the purpose of avoiding "double assessment" in CBE programs. For example, a key quality principle that is central to CBE philosophy is that "students advance upon demonstrated mastery" (Sturgis & Casey, 2018, p. 7). Assessing students again on a previously mastered competency could be considered "double assessment" because it prevents students from moving on to a new competency and might be considered the equivalent of seat time, or just another hoop to jump through.

Given that CBE is founded on the rejection of seat time as a basis for earning academic credit in favor of a focus on demonstrated proficiency, CBE program designers strive to eliminate activities that do little to measure proficiency and essentially equate to seat time. To many professionals at CBE institutions, repeating a competency assessment would not serve the purpose of ensuring mastery of knowledge and skills if mastery has already been demonstrated on an assessment; it would only add time and cost to the student's learning journey.
Redundancies in curriculum and assessment that may occur accidentally in traditional, credit- or time-based institutions should be avoided in programs that are intentionally designed around student mastery of distinct competencies (Klein-Collins, 2012). Avoidance of "double assessment" in CBE, then, could be viewed as an effort to eliminate redundancy and reduce the cost of education for students and the institution.

Because "double assessment" is not well defined in the literature, it can be interpreted in a variety of ways and perhaps misinterpreted, resulting in practices that hinder rather than promote high-quality competency assessment. For example, some have interpreted "double assessment" to mean that it is against assessment best practice to use more than one type of assessment to assess a single competency, even though using a variety of assessments and collecting multiple samples of evidence when drawing conclusions about students' knowledge are considered assessment best practices (Brookhart et al., 2019; McMillan, 2018; Suskie, 2018). This belief about "double assessment" can result in the use of a single high-stakes assessment to award credit for a competency, or even a course, when a combination of assessments might actually be needed to draw valid inferences about a particular competency.

The following scenario describes a situation in which more than one form of assessment is desired to draw valid inferences about student proficiency, but in which a misinterpretation of "double assessment" might prevent the best assessment strategy from being used.

According to the book Assessing Student Learning: A Common Sense Guide (3rd edition), an assessment is considered good quality "only if it is the right assessment for the learning goals you want to assess and the decisions you want the resulting evidence to inform" (Suskie, 2018, p. 23).
The problem is that any one type of assessment has limitations and in many cases might not, on its own, be entirely the right assessment to provide the evidence needed to "certify" competence (National Research Council, 2001; Suskie, 2018). What if the experts working on a course determine that a combination of assessment types is actually needed to obtain the evidence necessary for making valid inferences about student mastery of a competency? "Using a variety of assessments … lets us infer more confidently how well students have achieved key learning goals" (Suskie, 2018, p. 28).

Although there are many assessment formats, this paper will focus on two main forms of assessment, selected-response assessment and performance assessment, and compare their benefits and weaknesses.

Selected-response assessments such as multiple-choice, in which students select the correct answer to a question from provided choices, are commonly used because they are objective and can be auto-graded, which makes them affordable and scalable since they do not require significant faculty time compared to performance assessments. They can also provide immediate, automatic feedback about students' performance and, when meaningful feedback is provided, can point students to the areas of the content in which they need remediation. In addition to these practical advantages, a strategic advantage of selected-response assessments is that they "do a good job of assessing subject matter and procedural knowledge, and simple understanding, particularly when students must recognize or remember isolated facts, definitions, spellings, concepts, and principles" (McMillan, 2018, p. 77).

Selected-response assessments are, however, limited in their ability to measure skills and abilities such as logical reasoning, critical thinking, ethical decision-making, interpersonal "soft" skills, and written communication, to list just a few. Higher education institutions are placing greater emphasis on assessment activities that promote lifelong learning skills and allow students to tie their learning to real-world problems and contexts, so that they can see how their learning will live beyond the classroom (Davidson, 2017).

Specifically in competency-based education, in which there is an inherent focus on application of knowledge in real-world contexts, "a multiple-choice, standardized test is likely inadequate to assess most competencies. Instead, what are required are assignments that present tasks or situations that students will encounter in life and in the workplace" (Klein-Collins, 2013, p. 7). A promise of competency-based education is that students will leave the university more competent to enter the workforce because they are required to demonstrate mastery to earn a credential. Objective assessments do not give students the opportunity to leave with artifacts showing marketable skills that can be presented to employers as evidence of their competence, nor do they generally give students the opportunity to practice applying skills in a real-world context. They do, however, have some authentic value in programs in which students must pass assessments in a similar format for licensure or certification after graduation, as in teaching, nursing, and accounting, or to gain admission to graduate school.

Performance assessment, or "open-ended tasks that call upon students to apply their knowledge and skills to create a product or solve a problem" (National Research Council, 2001, p. 29), is seemingly the preferred method of assessment in CBE.
There is a strong push in CBE institutions to use "authentic" performance assessment, as reflected in the Quality Framework for Competency-Based Education Programs (2017) from the Competency-Based Education Network (C-BEN): "Authentic assessments and their corresponding rubrics are key components of CBE, which is anchored by the belief that progress toward a credential should be determined by what learners know and are able to do" (p. 17). But performance assessments also have limitations, such as subjectivity in the evaluation of student performance, and overuse can be taxing for students and for the faculty who must evaluate the assessments. "Because performance assessments are time intensive for teachers and students, they are usually not the best choice for assessing vast amounts of knowledge" (McMillan, 2018, p. 77). A wide range of facts and terminology, which could easily be assessed with an objective assessment, could not practically or authentically be assessed in a performance task.

This uncertainty is one reason why multiple samples of evidence are preferred to a single assessment for making inferences about student learning.

Standards that are specific to CBE also promote multiple forms of assessment, such as this example from iNACOL's Quality Principles for Competency-Based Education: "Students are empowered and engaged when the process of assessing learning is transparent, timely, draws upon multiple sources of evidence [emphasis added] and communicates progress" (Sturgis & Casey, 2018, p. 17).

C-BEN includes in its Quality Framework for Competency-Based Education Programs (2017) that CBE models "use a range of assessment types and modalities to measure the transfer of learning and mastery into varied contexts" and that assessments are "designed to provide learners with multiple opportunities and ways to demonstrate competency, including measures for both learning and the ability to apply (or transfer) that learning in novel settings and situations" (p. 17). Neither of these sources of CBE assessment best practices states that only one method of assessment can be used to assess a competency.

Figures 1 and 2 illustrate the concept of assessment instances as "snapshots of behavior" from which educators make estimates or inferences that are "bound to be at least somewhat inaccurate" (Suskie, 2018, p. 28).

During a conference session, the authors presented the photograph in Figure 1 to illustrate how assessment "snapshots" provide limited information (Tkatchov & Hugus, 2019). Participants were asked to judge where the photographer was located when taking the photograph in Figure 1. Responses included "in an airplane" and "in a field outside."

[Correction added on July 11, 2020, after first online publication: The blinded text has been replaced with the reference citation (Tkatchov & Hugus, 2019).]

Next, participants were shown a second photograph (Figure 2) that provides additional information. With new information from a different angle, participants were asked to judge where the photographer was when taking the photograph.
With new information, including evidence that situated the photographer inside a building, responses changed to "in a house or building looking out a window." Having more information, or a second snapshot to complement the first, allowed the participants to make a more accurate judgment about where the photographer was when taking the photograph. The second photograph provided more breadth, giving the viewer a better snapshot of the photographer's location, but it lost some of the depth and detail of the clouds that was apparent in the first photo.

As illustrated by the two photographs, a variety of assessment formats, such as a selected-response assessment in combination with a performance assessment, can be combined to complement each other or to compensate for each other's deficiencies. They can also be used to address the different cognitive levels represented in a competency. "At lower levels of competence, multiple-choice and other tests of objective learning may be appropriate. At higher levels of competence, however, getting at more complex and analytical thinking requires different kinds of assessment such as student narratives, demonstrations, simulations, or performance-based assignments" (Klein-Collins, 2013, p. 12). A complementary assessment strategy for competencies that encompass theory and application will capture the breadth of knowledge (the recall level) as well as the depth of knowledge (the application level), and more than one assessment format might be combined to produce a more complete picture of learners' competency.

Despite the stated inadequacy of multiple-choice assessments alone for measuring most competencies, they are often the preferred assessment format because they are scalable at low cost. Once a multiple-choice assessment is developed, it can be used to assess tens of thousands or even hundreds of thousands of students at a very small incremental variable cost.
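The scalability advantage described here is simple fixed-cost amortization. A minimal sketch with hypothetical figures (the development and grading costs below are invented purely for illustration, not taken from the article):

```python
def per_student_cost(development_cost, grading_cost_per_student, n_students):
    """Amortized cost of one assessment: fixed development cost spread
    over all students, plus the variable cost of grading each attempt."""
    return development_cost / n_students + grading_cost_per_student

# Hypothetical figures: an auto-graded multiple-choice bank is expensive
# to build but nearly free to grade, while a performance task is cheaper
# to build but requires paid faculty evaluation of every submission.
mc_cost = per_student_cost(50_000, 0.05, 10_000)     # 5.05 per student
perf_cost = per_student_cost(20_000, 25.00, 10_000)  # 27.00 per student
```

As enrollment grows, the fixed term shrinks toward zero for both formats, so the gap converges to the per-student grading cost, which is exactly the budget pressure the article describes.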
Most of the cost of a multiple-choice assessment lies in its front-end development. Contrasted with the high cost of evaluating task-based or performance assessments, this significant difference in per-student evaluation cost can bias budget-conscious institutions toward multiple-choice assessment.

Institutions must balance their own assessment cost considerations against how those considerations affect their students. Not all decisions that increase assessment cost are bad for students, and increases in the cost of assessment do not have to be passed on to students; cost-conscious institutions can frequently leverage cost reductions in other areas and maintain existing cost levels. But if a cost-conscious decision degrades the quality of student learning or of student assessment, increasing the likelihood of inaccurate conclusions about students' competence, then that decision is a disservice to students, to the institution, and to competency-based education as a whole. Striking the proper balance between cost and assessment quality is a CBE institution's ethical responsibility. Failing to strike that balance runs the risk that employers will lose confidence in competency-based degrees and credentials and become reluctant to hire CBE graduates.

A complementary assessment strategy that combines assessment formats might be necessary to make a valid judgment about students' competency in conversing about the weather in French. Students would need to demonstrate their ability to hold a conversation about the weather in French in a performance assessment task, especially to capture correct pronunciation (outcome 3) and the ability to give appropriate responses to questions and comments (outcome 4).
The performance assessment would allow students to demonstrate depth of knowledge at the application level, but it would not be practical to expect students to perform, or faculty to evaluate, conversations in every conceivable weather-related situation. To assess the range of vocabulary and make an inference about students' ability to transfer their learning to a variety of situations, a selected-response, objective assessment might be used to capture the breadth of students' knowledge of weather-related vocabulary (outcome 1) and correct sentence structure (outcome 2) at the recall level.

In this case, at least two assessment formats would be combined into a single assessment strategy for one competency. Half of the competency would be assessed at a lower cognitive level but over a broad range of content, while the other half would be assessed at a higher level of application but over a narrower range of content. This strategy would be more scalable than giving students a multitude of performance assessments, but more dependable than relying on a single assessment.

Redundancy in assessment adds unnecessary time to degree completion, which also increases the cost of tuition. In addition, assessment practices that are overly burdensome for faculty can place too great a financial burden on an institution and, ultimately, on students. CBE institutions are therefore wise to avoid redundancy in assessment whenever possible and to prioritize the scalability of assessments.
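The two-format strategy described above amounts to a conjunctive decision rule: credit for the competency is awarded only when each component meets its own cutoff, so strength on one component cannot mask weakness on the other. A minimal sketch, with hypothetical cutoff values not drawn from the article:

```python
def competency_demonstrated(objective_pct, rubric_score,
                            objective_cutoff=0.80, rubric_cutoff=3):
    """Conjunctive mastery rule for a two-format assessment strategy.

    objective_pct: proportion correct on the selected-response component
                   (breadth, recall level).
    rubric_score:  evaluator's rating of the performance task on a
                   hypothetical 1-4 rubric (depth, application level).
    Both components must meet their cutoff; neither can compensate
    for the other.
    """
    return objective_pct >= objective_cutoff and rubric_score >= rubric_cutoff

# A student who recalls vocabulary well but cannot yet sustain the
# conversation (rubric 2 of 4) has not demonstrated the competency:
competency_demonstrated(0.92, 2)   # False
competency_demonstrated(0.85, 3)   # True
```

A compensatory alternative (averaging the two scores) would be simpler but would let breadth of recall substitute for the ability to apply, which is precisely the inference the complementary strategy is designed to avoid.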
However, competency-based higher education must strike a balance between minimizing the cost of education and maintaining high-quality assessment practices that give employers confidence in the legitimacy of competency-based credentials.

When the literature resoundingly supports the use of multiple forms of assessment as a best practice, restricting assessment in CBE to only one form on the basis of an ill-defined and unsubstantiated "double assessment" rule can have the opposite of its intended effect: it can reduce, not enhance, the quality of competency assessments and the validity of the inferences about student learning derived from those assessments.

More discussion is needed in the CBE community about how higher education institutions can deliver on CBE's promise of an affordable and high-quality education that prepares students for life and work after graduation. Policies intended to manage cost and scalability in assessment should be counterbalanced by safeguards intended to ensure quality, such as allowances for exceptions when competencies call for a greater investment in assessment or are best assessed through multiple modalities.
Further research and experimentation in scalable performance assessments and hybrid assessments that combine formats are important next steps in CBE innovation.

The Journal of Competency-Based Education, published June 30, 2020.
Most of the cost of multiple-choice assessment is in the front-end development of the assessment itself. When contrasted against the high cost of evaluating task-based or performance assessments, this significant difference in per-student evaluation cost can bias budget-conscious institutions toward multiple-choice assessment.</p><p>Institutions must balance their own assessment cost considerations with how those considerations impact their students. Not all decisions that increase assessment cost are bad for students, and increases in the cost of assessment do not have to be passed on to students. Cost-conscious institutions can frequently lever cost reductions in other areas and maintain existing cost levels. Just because a decision is cost-conscious for the institution, if it degrades the quality of student learning or the quality of student assessment—increasing the likelihood of inaccurate assumptions regarding students' competence—then that cost-conscious decision represents a disservice to students, to the institution, and to all competency-based education. The proper balance of cost and assessment quality is a CBE institution's ethical responsibility. Failing to properly balance this ethical responsibility runs the risk that employers will lose confidence in competency-based degrees and credentials and be reluctant to hire CBE graduates.</p><p>A complementary assessment strategy that combines assessment formats might be necessary to make a valid judgment about the students' competency in conversing about the weather in French. Students would need to demonstrate their ability to have a conversation about the weather in French in a performance assessment task, especially to capture correct pronunciation (outcome 3) and the ability to give appropriate responses to questions and comments (outcome 4). 
The performance assessment would allow for the students to demonstrate depth of knowledge at the application level, but it would not be practical to expect the students to perform, or the faculty to evaluate, conversations in every conceivable weather-related situation. To assess the range of vocabulary and make an inference about students' ability to transfer their learning in a variety of situations, a selected-response, objective assessment might be used to capture the breadth of students' knowledge of weather-related vocabulary (outcome 1) and correct sentence structure (outcome 2) at the recall level.</p><p>In this case, at least two assessment formats would be used to create a single assessment strategy to assess a competency. Half of the competency would be assessed at the lower level but over a broad range of content, while the other half would be assessed at a higher level of application but over a narrower range of content. This strategy would be more scalable than giving the students a multitude of performance assessments, but it would be more dependable than relying on only one assessment.</p><p>Redundancy in assessment adds unnecessary time to degree completion, which also increases the cost of tuition. In addition, assessment practices that are overly burdensome for faculty can also place too much of a financial burden on an institution and, ultimately, the students. Therefore, CBE institutions are wise to avoid redundancy in assessment whenever possible and to prioritize the scalability of assessments. 
However, there must be a balance in competency-based higher education between minimizing the cost of education and maintaining high-quality assessment practices that give employers confidence in the legitimacy of competency-based credentials.</p><p>When literature resoundingly supports the use of multiple forms of assessment as a best practice, restricting assessment practices in CBE to only one form of assessment based on an ill-defined and unsubstantiated “double assessment” rule can have the opposite of its intended effect. It can reduce, not enhance, the quality of competency assessments and the validity of inferences about student learning derived from those assessments.</p><p>More discussion is needed among the CBE community about how higher education institutions can deliver on CBE's promise of providing an affordable and <i>high-quality</i> education that prepares students for life and work after graduation. Policies intended to manage cost and scalability in assessment should be counterbalanced by safeguards intended to ensure quality, such as allowances for exceptions when competencies call for a greater investment in assessment or are best assessed through multiple modalities. 
Further research and experimentation in scalable performance assessments and hybrid assessments that combine formats are important next steps in CBE innovation.</p>\",\"PeriodicalId\":101234,\"journal\":{\"name\":\"The Journal of Competency-Based Education\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-06-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1002/cbe2.1215\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The Journal of Competency-Based Education\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/cbe2.1215\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Journal of Competency-Based Education","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/cbe2.1215","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Reconciling assessment quality standards and “double assessment” in competency-based higher education

High standards for assessment practices are essential in all institutions of learning. The role of assessment is arguably even more significant in competency-based education (CBE) institutions, since credits and degrees are earned solely on the basis of mastery of competencies demonstrated through assessments. They are not earned, as in traditional schooling models, on an average that combines assessments with the accumulation of seat time (attendance) and points for activities that do not necessarily indicate competency (e.g., classwork, discussion participation).

CBE institutions making the claim that graduates are competent in stated competencies have a responsibility for making the quality of competency assessments a high priority in continual institutional improvement because “in CBE—unlike most traditional programs based on the credit hour—the institution must state with authority that its graduates have demonstrated the learning outcomes required for a degree” (Klein-Collins, 2013, p.7), and “the value of CBE credentials hinges on the reliability and validity of those assessments” in determining graduates' competence (McClarty & Gaertner, 2015, p. 3).

There are commonly accepted standards and best practices for the assessment of learning that apply to all learning models in general as well as assessment concepts that may be specific to the CBE model. One aspect of CBE assessment “best practices,” which was evident in assessment policies and anecdotally in conversations with colleagues at various CBE institutions, was the concept of “double assessment.”

Similar to how the “double jeopardy” clause in the Fifth Amendment of the US Constitution prevents a criminal defendant from being prosecuted more than once for the same crime, a prohibition against “double assessment” in CBE means that once a student has been assessed and has successfully demonstrated mastery of a competency on an assessment, that student should not be assessed on that competency again. “Double assessment” only applies to successful demonstration of mastery of a competency; it does not prohibit or preclude multiple attempts at an assessment when students fail to demonstrate competence on it. Allowing students multiple attempts to pass a competency assessment is a central tenet of CBE.

In addition, “double assessment” is only in reference to summative assessment, which is “conducted to help determine whether a student has attained a certain level of competency” (National Research Council, 2001, p. 40) or “to certify, report on, or evaluate learning” (Brookhart, McTighe, Stiggins, & Wiliam, 2019, p. 6). Using multiple types of formative assessment, or informal assessment that is used to monitor student progress and does not equate to a grade or credit, is common in higher education and viewed as best practice. There is, however, debate over whether using more than one summative assessment to assess students on the same content or learning outcomes is beneficial or whether it is unnecessary and may even inhibit student learning (Beagley & Capaldi, 2016; Domenech, Blazquez, de la Poza, & Munoz-Miquel, 2015; Lawrence, 2013).

The origin of “double assessment” in CBE is difficult to investigate because virtually no literature exists that defines it and explains what it is and what it is not. Literature about assessment best practice in CBE does not specifically and directly address “double assessment”; however, there is some evidence in CBE literature that allows us to infer the purpose of avoiding “double assessment” in CBE programs. For example, a key quality principle that is central to CBE philosophy is that “students advance upon demonstrated mastery” (Sturgis & Casey, 2018, p. 7). Assessing students again on a previously mastered competency could possibly be considered committing “double assessment” because it is preventing students from moving on to a new competency and might be considered the equivalent of seat time or just another hoop to jump through.

Given that CBE is founded on the rejection of seat time as a basis for earning academic credit in exchange for a focus on demonstrated proficiency, CBE program designers strive to eliminate activities that do little to measure proficiency and essentially equate to seat time. To many professionals at CBE institutions, repetition of a competency assessment would not serve the purpose of ensuring mastery of knowledge and skills if mastery has already been demonstrated on an assessment; it would only serve to add time and cost to the students' learning journey. Redundancies in curriculum and assessment that may occur accidentally in traditional, credit- or time-based institutions should be avoided in programs that are intentionally designed around student mastery of distinct competencies (Klein-Collins, 2012). Avoidance of “double assessment” in CBE, then, could be viewed as an effort to eliminate redundancy and reduce the cost of education for students and the institution.

Because “double assessment” is not well defined in literature, it can be interpreted in a variety of ways and perhaps misinterpreted, resulting in practices that hinder rather than promote high-quality competency assessment. For example, some have interpreted “double assessment” to mean that it is against assessment best practice to use more than one type of assessment to assess a single competency, even though using a variety of assessments and collecting multiple samples of evidence when drawing conclusions about students' knowledge are considered assessment best practices (Brookhart et al., 2019; McMillan, 2018; Suskie, 2018). This belief about “double assessment” can result in the use of a single high-stakes assessment to award credit for a competency or even a course when a combination of assessments might actually be needed to draw valid inferences about a particular competency.

The following scenario provides a situation in which more than one form of assessment is desired to draw valid inferences about student proficiency, but in which a misinterpretation of “double assessment” might prevent the best assessment strategy from being used.

According to the book Assessing Student Learning: A Common Sense Guide (3rd edition), an assessment is considered good quality “only if it is the right assessment for the learning goals you want to assess and the decisions you want the resulting evidence to inform” (Suskie, 2018, p. 23). The problem is that any one type of assessment has limitations and in many cases might not be entirely, on its own, the right assessment to provide the needed evidence to “certify” competence (National Research Council, 2001; Suskie, 2018). What if it is determined by experts working on a course that a combination of assessment types is actually needed to obtain the evidence necessary for making valid inferences about student mastery of a competency? “Using a variety of assessments … lets us infer more confidently how well students have achieved key learning goals” (Suskie, 2018, p.28).

Although there are many assessment formats, this paper will focus on two main forms of assessment, selected-response assessment and performance assessment, to compare their benefits and weaknesses.

Selected-response assessments such as multiple-choice, in which students select a correct answer to questions from provided choices, are commonly used because they are objective, and they have the advantage of being auto-graded, which makes them affordable and scalable since they do not require significant faculty time compared to performance assessments. They are also able to provide immediate, automatic feedback about students' performance and, when meaningful feedback is provided, can point students to the areas of the content in which they need remediation. In addition to selected-response assessments' practical advantages, a strategic advantage is that they “do a good job of assessing subject matter and procedural knowledge, and simple understanding, particularly when students must recognize or remember isolated facts, definitions, spellings, concepts, and principles” (McMillan, 2018, p. 77).
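The scalability described above follows from the mechanics of auto-grading: scoring a selected-response attempt reduces to comparing each response against an answer key, an operation whose cost does not depend on how many students respond. A minimal sketch in Python (the item identifiers and keyed answers are hypothetical):

```python
# Minimal sketch of auto-grading a selected-response assessment.
# Scoring is a direct comparison against a fixed answer key, so the
# marginal cost per student is effectively zero, and feedback on
# missed items can be generated immediately and automatically.

ANSWER_KEY = {"item1": "B", "item2": "D", "item3": "A"}  # hypothetical items

def score_attempt(responses):
    """Return (proportion correct, list of missed items) for one attempt."""
    correct = {item: responses.get(item) == key
               for item, key in ANSWER_KEY.items()}
    score = sum(correct.values()) / len(ANSWER_KEY)
    # Immediate feedback: point the student to the items needing remediation.
    missed = [item for item, ok in correct.items() if not ok]
    return score, missed

score, missed = score_attempt({"item1": "B", "item2": "C", "item3": "A"})
print(round(score, 2), missed)
```

The same loop scores one attempt or one hundred thousand; only the size of the response set changes, which is the practical advantage the paragraph above describes.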

Selected-response assessments are, however, limited in their ability to measure a multitude of skills and abilities such as logical reasoning, critical thinking, ethical decision-making, interpersonal “soft” skills, and written communication, just to list a few. Higher education institutions are placing a greater emphasis on assessment activities that promote lifelong learning skills and allow students to tie their learning to real-world problems and contexts so that they can see how their learning will live beyond the classroom (Davidson, 2017).

Specifically in competency-based education, in which there is an inherent focus on application of knowledge in real-world contexts, “a multiple-choice, standardized test is likely inadequate to assess most competencies. Instead, what are required are assignments that present tasks or situations that students will encounter in life and in the workplace” (Klein-Collins, 2013, p.7). A promise of competency-based education is that students will leave the university more competent to enter the workforce because they are required to demonstrate mastery to earn a credential. Objective assessments do not provide students with the opportunity to leave with artifacts showing marketable skills that can be provided to employers as evidence of their competence, nor do they generally give students the opportunity to practice applying skills in a real-world context. They do, however, have some authentic value in programs when students need to pass assessments in a similar format for licensure or certification after graduation, as in teaching, nursing, and accounting, or to gain admittance into graduate school.

Performance assessment, or “open-ended tasks that call upon students to apply their knowledge and skills to create a product or solve a problem” (National Research Council, 2001, p. 29), is seemingly the preferred method of assessment in CBE. There is a big push in CBE institutions to use “authentic” performance assessment as reflected in the Quality Framework for Competency-Based Education Programs (2017) from the Competency-Based Education Network (C-BEN): “Authentic assessments and their corresponding rubrics are key components of CBE, which is anchored by the belief that progress toward a credential should be determined by what learners know and are able to do” (p. 17). But performance assessments also have limitations, such as subjectivity in the evaluation of student performance, and overuse can be taxing for students and faculty who must evaluate the assessments. “Because performance assessments are time intensive for teachers and students, they are usually not the best choice for assessing vast amounts of knowledge” (McMillan, 2018, p. 77). A wide range of facts and terminology, which would easily be assessed using an objective assessment, would not practically or authentically be assessed in a performance task.

The limitations inherent in any single assessment format are one reason why multiple samples of evidence are preferred to a single assessment for making inferences about student learning.

Standards that are specific to CBE also promote multiple forms of assessment, such as this example from iNACOL's Quality Principles for Competency-Based Education: “Students are empowered and engaged when the process of assessing learning is transparent, timely, draws upon multiple sources of evidence [emphasis added] and communicates progress” (Sturgis & Casey, 2018, p. 17). C-BEN includes in its Quality Framework for Competency-Based Education Programs (2017) that CBE models “use a range of assessment types and modalities to measure the transfer of learning and mastery into varied contexts” and that assessments are “designed to provide learners with multiple opportunities and ways to demonstrate competency, including measures for both learning and the ability to apply (or transfer) that learning in novel settings and situations” (p. 17). Neither of these sources of CBE assessment best practices states that only one method of assessment can be used to assess a competency.

Figures 1 and 2 below illustrate the concept of assessment instances as “snapshots of behavior” from which educators make estimates or inferences that are “bound to be at least somewhat inaccurate” (Suskie, 2018, p. 28).

During a conference session, the authors presented the photograph in Figure 1 to illustrate how assessment “snapshots” provide limited information (Tkatchov & Hugus, 2019). Participants were asked to make a judgment about where the photographer was located when taking the photograph in Figure 1. Responses included “in an airplane” and “in a field outside.”

[Correction added on July 11, 2020, after first online publication: The blinded text has been replaced with the reference citation (Tkatchov & Hugus, 2019).] Next, participants were shown a second photograph (Figure 2) that provided additional information. With new information from a different angle, participants were again asked to judge where the photographer was when taking the photograph. Because the new evidence situated the photographer inside a building, responses changed to “in a house or building looking out a window.” Having more information, or a second snapshot to complement the first one, allowed the participants to make a more accurate judgment as to where the photographer was when taking the photograph. The second photograph provided more breadth, giving the viewer a better snapshot of the photographer's location, but it lost some of the depth and detail of the clouds that were apparent in the first photo.

As illustrated with the two photographs, a variety of assessment formats, such as a selected response in combination with a performance assessment, can be combined to complement each other or to supplement each other's deficiencies. They can also be used to address the different cognitive levels that are represented in a competency. “At lower levels of competence, multiple-choice and other tests of objective learning may be appropriate. At higher levels of competence, however, getting at more complex and analytical thinking requires different kinds of assessment such as student narratives, demonstrations, simulations, or performance-based assignments” (Klein-Collins, 2013, p. 12). A complementary assessment strategy for competencies that encompass theory and application will capture the breadth of knowledge (the recall level) as well as the depth of knowledge (the application level), and more than one assessment format might be combined to accomplish a more complete picture of learners' competency.

Despite the stated inadequacy of using only multiple-choice assessments for measuring most competencies, multiple-choice assessments are often the preferred assessment format because they are scalable at low cost. Once a multiple-choice assessment is developed, it can be used to assess tens of thousands, or even hundreds of thousands, of students at a very small incremental variable cost. Most of the cost of multiple-choice assessment lies in the front-end development of the assessment itself. When contrasted against the high cost of evaluating task-based or performance assessments, this significant difference in per-student evaluation cost can bias budget-conscious institutions toward multiple-choice assessment.
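The cost structure sketched above — high fixed development cost with near-zero marginal scoring cost for multiple-choice, versus a human-evaluation cost on every submission for performance tasks — can be made concrete with a back-of-the-envelope calculation. All dollar figures below are illustrative assumptions, not data from this article:

```python
# Illustrative per-student cost model. Multiple-choice amortizes a large
# fixed development cost over every test-taker; performance assessment
# adds an evaluator cost for each individual submission. All figures
# are hypothetical and chosen only to show the shape of the comparison.

def per_student_cost(fixed_dev, marginal, n_students):
    """Amortized development cost plus per-student scoring cost."""
    return fixed_dev / n_students + marginal

# Hypothetical: $50k item-bank development, $0.05 auto-scoring per attempt.
mc = per_student_cost(fixed_dev=50_000, marginal=0.05, n_students=100_000)
# Hypothetical: $10k task/rubric design, $25 of evaluator time per attempt.
perf = per_student_cost(fixed_dev=10_000, marginal=25.00, n_students=100_000)

print(f"multiple-choice: ${mc:.2f}/student")
print(f"performance:     ${perf:.2f}/student")
```

At scale, the fixed development cost amortizes away and the marginal evaluation cost dominates, which is exactly the incentive that can bias budget-conscious institutions toward multiple-choice formats.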

Institutions must balance their own assessment cost considerations with how those considerations affect their students. Not all decisions that increase assessment cost are bad for students, and increases in the cost of assessment do not have to be passed on to students. Cost-conscious institutions can often offset such increases with cost reductions in other areas and maintain existing cost levels. However cost-conscious a decision may be for the institution, if it degrades the quality of student learning or of student assessment, increasing the likelihood of inaccurate inferences about students' competence, then that decision represents a disservice to students, to the institution, and to competency-based education as a whole. Striking the proper balance between cost and assessment quality is a CBE institution's ethical responsibility. Failing to strike that balance runs the risk that employers will lose confidence in competency-based degrees and credentials and become reluctant to hire CBE graduates.

A complementary assessment strategy that combines assessment formats might be necessary to make a valid judgment about the students' competency in conversing about the weather in French. Students would need to demonstrate their ability to have a conversation about the weather in French in a performance assessment task, especially to capture correct pronunciation (outcome 3) and the ability to give appropriate responses to questions and comments (outcome 4). The performance assessment would allow for the students to demonstrate depth of knowledge at the application level, but it would not be practical to expect the students to perform, or the faculty to evaluate, conversations in every conceivable weather-related situation. To assess the range of vocabulary and make an inference about students' ability to transfer their learning in a variety of situations, a selected-response, objective assessment might be used to capture the breadth of students' knowledge of weather-related vocabulary (outcome 1) and correct sentence structure (outcome 2) at the recall level.

In this case, at least two assessment formats would be combined into a single assessment strategy for one competency. Half of the competency would be assessed at a lower cognitive level but across a broad range of content, while the other half would be assessed at the higher application level but across a narrower range of content. This strategy would be more scalable than administering a multitude of performance assessments, yet more dependable than relying on a single assessment alone.

Redundancy in assessment adds unnecessary time to degree completion, which also increases the cost of tuition. In addition, assessment practices that are overly burdensome for faculty can also place too much of a financial burden on an institution and, ultimately, the students. Therefore, CBE institutions are wise to avoid redundancy in assessment whenever possible and to prioritize the scalability of assessments. However, there must be a balance in competency-based higher education between minimizing the cost of education and maintaining high-quality assessment practices that give employers confidence in the legitimacy of competency-based credentials.

When the literature resoundingly supports the use of multiple forms of assessment as a best practice, restricting assessment practices in CBE to a single form of assessment on the basis of an ill-defined and unsubstantiated “double assessment” rule can have the opposite of its intended effect: it can reduce, rather than enhance, the quality of competency assessments and the validity of the inferences about student learning drawn from those assessments.

More discussion is needed among the CBE community about how higher education institutions can deliver on CBE's promise of providing an affordable and high-quality education that prepares students for life and work after graduation. Policies intended to manage cost and scalability in assessment should be counterbalanced by safeguards intended to ensure quality, such as allowances for exceptions when competencies call for a greater investment in assessment or are best assessed through multiple modalities. Further research and experimentation in scalable performance assessments and hybrid assessments that combine formats are important next steps in CBE innovation.
