Best Practices for Constructed-Response Scoring
Daniel F. McCaffrey, Jodi M. Casabianca, Kathryn L. Ricker-Pedley, René R. Lawless, Cathy Wendler
This document describes a set of best practices for developing, implementing, and maintaining the critical process of scoring constructed-response tasks. These practices address both the use of human raters and automated scoring systems as part of the scoring process and cover the scoring of written, spoken, performance, or multimodal responses. Best Practices for Constructed-Response Scoring is designed not to act as an independent guide, but rather to be used in conjunction with other ETS publications, including the Guidelines for Constructed-Response and Other Performance Assessments, ETS Standards for Quality and Fairness, ETS Guidelines for Fair Tests and Communications, and the ETS Guidelines for Fairness.
{"title":"Best Practices for Constructed-Response Scoring","authors":"Daniel F. McCaffrey, Jodi M. Casabianca, Kathryn L. Ricker-Pedley, René R. Lawless, Cathy Wendler","doi":"10.1002/ets2.12358","DOIUrl":"10.1002/ets2.12358","url":null,"abstract":"<p>This document describes a set of best practices for developing, implementing, and maintaining the critical process of scoring constructed-response tasks. These practices address both the use of human raters and automated scoring systems as part of the scoring process and cover the scoring of written, spoken, performance, or multimodal responses. <i>Best Practices for Constructed-Response Scoring</i> is designed not to act as an independent guide, but rather to be used in conjunction with other ETS publications, including the <i>Guidelines for Constructed-Response and Other Performance Assessments, ETS Standards for Quality and Fairness, ETS Guidelines for Fair Tests and Communications</i>, and the <i>ETS Guidelines for Fairness</i>.</p>","PeriodicalId":11972,"journal":{"name":"ETS Research Report Series","volume":"2022 1","pages":"1-58"},"PeriodicalIF":0.0,"publicationDate":"2022-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ets2.12358","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45078904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating Constructed-Response Scoring Over Time: The Effects of Study Design on Trend Rescore Statistics
John R. Donoghue, Catherine A. McClellan, Melinda R. Hess
When constructed-response items are administered for a second time, it is necessary to evaluate whether the current Time B administration's raters have drifted from the scoring of the original administration at Time A. To study this, Time A papers are sampled and rescored by Time B scorers. Commonly the scores are compared using the proportion of exact agreement across times and/or t-statistics comparing Time A means to Time B means. It is common to treat these rescores with procedures that assume a multinomial sampling model, which is incorrect. The correct, product-multinomial model reflects the stratification of Time A scores. Using direct computation, the research report demonstrates that both proportion of exact agreement and the t-statistic can deviate substantially from expected behavior, providing misleading results. Reweighting the rescore table gives each statistic the correct expected value but does not guarantee that the usual sampling distributions hold. It is also noted that the results apply to a wider class of situations in which a set of papers is scored by one group of raters or scoring engine and then a sample is selected to be evaluated by a different group of raters or scoring engine.
{"title":"Investigating Constructed-Response Scoring Over Time: The Effects of Study Design on Trend Rescore Statistics","authors":"John R. Donoghue, Catherine A. McClellan, Melinda R. Hess","doi":"10.1002/ets2.12360","DOIUrl":"10.1002/ets2.12360","url":null,"abstract":"<p>When constructed-response items are administered for a second time, it is necessary to evaluate whether the current Time B administration's raters have drifted from the scoring of the original administration at Time A. To study this, Time A papers are sampled and rescored by Time B scorers. Commonly the scores are compared using the proportion of exact agreement across times and/or <i>t</i>-statistics comparing Time A means to Time B means. It is common to treat these rescores with procedures that assume a multinomial sampling model, which is incorrect. The correct, product-multinomial model reflects the stratification of Time A scores. Using direct computation, the research report demonstrates that both proportion of exact agreement and the <i>t</i>-statistic can deviate substantially from expected behavior, providing misleading results. Reweighting the rescore table gives each statistic the correct expected value but does not guarantee that the usual sampling distributions hold. It is also noted that the results apply to a wider class of situations in which a set of papers is scored by one group of raters or scoring engine and then a sample is selected to be evaluated by a different group of raters or scoring engine.</p>","PeriodicalId":11972,"journal":{"name":"ETS Research Report Series","volume":"2022 1","pages":"1-14"},"PeriodicalIF":0.0,"publicationDate":"2022-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ets2.12360","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"51134089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Validating HiSET® Tests as High School Equivalency Tests That Improve Educational, Vocational, and Quality-of-Life Outcomes
David M. Klieger, Kevin M. Williams, Jennifer L. Bochenek, Chelsea Ezzo, Teresa Jackson
Results from two studies provided strong evidence for the validity of the HiSET® tests, thereby demonstrating that HiSET is a well-developed battery of tests with passing and college and career readiness (CCR) standards that, when met, provide a pathway to postsecondary education, better employment opportunities and wages, and a better quality of life to those who are unable to experience a traditional high school education. Positive relationships exist between HiSET scores and both high school grades and ACT scores, including high levels of agreement between HiSET CCR indicators and ACT CCR indicators. Therefore, evidence supports the claim that HiSET scores are measures of high school equivalency, preparedness for middle skills jobs, and college readiness. Furthermore, there is evidence that passing the HiSET provides value to stakeholders. Passing the HiSET battery is associated with gaining academic and personal skills, college enrollment, employment gains (e.g., obtaining employment, obtaining more full-time employment, wage increases, and improvement in a job or position), and quality-of-life improvements.
{"title":"Validating HiSET® Tests as High School Equivalency Tests That Improve Educational, Vocational, and Quality-of-Life Outcomes","authors":"David M. Klieger, Kevin M. Williams, Jennifer L. Bochenek, Chelsea Ezzo, Teresa Jackson","doi":"10.1002/ets2.12359","DOIUrl":"10.1002/ets2.12359","url":null,"abstract":"<p>Results from two studies provided strong evidence for the validity of the <i>HiSET</i>® tests, thereby demonstrating that HiSET is a well-developed battery of tests with passing and college and career readiness (CCR) standards that, when met, provide a pathway to postsecondary education, better employment opportunities and wages, and a better quality of life to those who are unable to experience a traditional high school education. Positive relationships exist between HiSET scores and both high school grades and ACT scores, including high levels of agreement between HiSET CCR indicators and ACT CCR indicators. Therefore, evidence supports the claim that HiSET scores are measures of high school equivalency, preparedness for middle skills jobs, and college readiness. Furthermore, there is evidence that passing the HiSET provides value to stakeholders. Passing the HiSET battery is associated with gaining academic and personal skills, college enrollment, employment gains (e.g., obtaining employment, obtaining more full-time employment, wage increases, and improvement in a job or position), and quality-of-life improvements.</p>","PeriodicalId":11972,"journal":{"name":"ETS Research Report Series","volume":"2022 1","pages":"1-31"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ets2.12359","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43919969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Studies of Possible Effects of GRE® ScoreSelect® on Subgroup Differences in GRE® General Test Scores
David M. Klieger, Lauren J. Kotloff, Vinetha Belur, Megan E. Schramm-Possinger, Steven L. Holtzman, Hezekiah Bunde
Intended consequences of giving applicants the option to select which test scores to report include potentially reducing measurement error and inequity in applicants' prior test familiarity. Our first study determined whether score choice options resulted in unintended consequences for lower performing subgroups by detrimentally increasing score gaps in ways and for reasons that the research literature had suggested. Our follow-up study explored possible determinants of changes in score gaps attributable to score choice options. Using GRE® SCORESELECT®, the score choice system for the GRE general test, we concluded that unintended consequences were few, small in magnitude, and usually undetectable. To the extent that unintended consequences occurred, they were limited to effects for citizenship subgroups and generally benefited lower performing subgroups.
{"title":"Studies of Possible Effects of GRE® ScoreSelect® on Subgroup Differences in GRE® General Test Scores","authors":"David M. Klieger, Lauren J. Kotloff, Vinetha Belur, Megan E. Schramm-Possinger, Steven L. Holtzman, Hezekiah Bunde","doi":"10.1002/ets2.12356","DOIUrl":"10.1002/ets2.12356","url":null,"abstract":"<p>Intended consequences of giving applicants the option to select which test scores to report include potentially reducing measurement error and inequity in applicants' prior test familiarity. Our first study determined whether score choice options resulted in unintended consequences for lower performing subgroups by detrimentally increasing score gaps in ways and for reasons that the research literature had suggested. Our follow-up study explored possible determinants of changes in score gaps attributable to score choice options. Using <i>GRE® SCORESELECT®</i>, the score choice system for the GRE general test, we concluded that unintended consequences were few, small in magnitude, and usually undetectable. To the extent that unintended consequences occurred, they were limited to effects for citizenship subgroups and generally benefited lower performing subgroups.</p>","PeriodicalId":11972,"journal":{"name":"ETS Research Report Series","volume":"2022 1","pages":"1-33"},"PeriodicalIF":0.0,"publicationDate":"2022-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ets2.12356","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44763775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alternative Methods for Item Parameter Estimation: From CTT to IRT
Hongwen Guo, Ru Lu, Matthew S. Johnson, Dan F. McCaffrey
It is desirable for an educational assessment to be constructed of items that can differentiate among the performance levels of test takers, and thus it is important to accurately estimate the item discrimination parameters in either classical test theory or item response theory. It is particularly challenging to do so when the sample sizes are small. The current study reexamined the relationship between the biserial correlation coefficient and the discrimination parameter to investigate whether the biserial correlation coefficient estimator could be modified and whether biserial-based estimators could be used as alternate estimates of the item discrimination indices. Results show that the modified and alternative approaches work slightly better under certain circumstances (e.g., for small sample sizes or shorter tests), assuming normality of the latent ability distribution. Applications of these alternative estimators are presented in item scaling and weighted differential item functioning analyses. Recommendations and limitations are discussed for practical use of these proposed methods.
{"title":"Alternative Methods for Item Parameter Estimation: From CTT to IRT","authors":"Hongwen Guo, Ru Lu, Matthew S. Johnson, Dan F. McCaffrey","doi":"10.1002/ets2.12355","DOIUrl":"10.1002/ets2.12355","url":null,"abstract":"<p>It is desirable for an educational assessment to be constructed of items that can differentiate different performance levels of test takers, and thus it is important to estimate accurately the item discrimination parameters in either classical test theory or item response theory. It is particularly challenging to do so when the sample sizes are small. The current study reexamined the relationship between the biserial correlation coefficient and the discrimination parameter to investigate whether the biserial correlation coefficient estimator could be modified and whether biserial-based estimators could be used as alternate estimates of the item discrimination indices. Results show that the modified and alternative approaches work slightly better under certain circumstances (e.g., for small sample sizes or shorter tests), assuming normality of the latent ability distribution. Applications of these alternative estimators are presented in item scaling and weighted differential item functioning analyses. Recommendations and limitations are discussed for practical use of these proposed methods.</p>","PeriodicalId":11972,"journal":{"name":"ETS Research Report Series","volume":"2022 1","pages":"1-16"},"PeriodicalIF":0.0,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ets2.12355","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49379524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Technology-Enhanced Items and Model–Data Misfit
Carol Eckerly, Yue Jia, Paul Jewsbury
Testing programs have explored the use of technology-enhanced items alongside traditional item types (e.g., multiple-choice and constructed-response items) as measurement evidence of latent constructs modeled with item response theory (IRT). In this report, we discuss considerations in applying IRT models to a particular type of adaptive testlet referred to as a branching item. Under the branching format, all test takers are assigned to a common question, and the assignment of the next question relies on the response to the first question through deterministic rules. In addition, the items at both stages are scored together as one polytomous item. Real and simulated examples are provided to discuss challenges in applying IRT models to branching items. We find that model–data misfit is likely to occur when branching items are scored as polytomous items and modeled with the generalized partial credit model and that the relationship between the discrimination of the routing component and the discriminations of the subsequent components seemed to drive the misfit. We conclude with lessons learned and provide suggested guidelines and considerations for operationalizing the use of branching items in future assessments.
{"title":"Technology-Enhanced Items and Model–Data Misfit","authors":"Carol Eckerly, Yue Jia, Paul Jewsbury","doi":"10.1002/ets2.12353","DOIUrl":"10.1002/ets2.12353","url":null,"abstract":"<p>Testing programs have explored the use of technology-enhanced items alongside traditional item types (e.g., multiple-choice and constructed-response items) as measurement evidence of latent constructs modeled with item response theory (IRT). In this report, we discuss considerations in applying IRT models to a particular type of adaptive testlet referred to as a branching item. Under the branching format, all test takers are assigned to a common question, and the assignment of the next question relies on the response to the first question through deterministic rules. In addition, the items at both stages are scored together as one polytomous item. Real and simulated examples are provided to discuss challenges in applying IRT models to branching items. We find that model–data misfit is likely to occur when branching items are scored as polytomous items and modeled with the generalized partial credit model and that the relationship between the discrimination of the routing component and the discriminations of the subsequent components seemed to drive the misfit. We conclude with lessons learned and provide suggested guidelines and considerations for operationalizing the use of branching items in future assessments.</p>","PeriodicalId":11972,"journal":{"name":"ETS Research Report Series","volume":"2022 1","pages":"1-16"},"PeriodicalIF":0.0,"publicationDate":"2022-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ets2.12353","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41321818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating the New TOEFL ITP® Speaking Test: Insights From Field Test Takers
Shinhye Lee
In response to calls for making key stakeholders' perspectives relevant in the test validation process, the study discussed in this report sought test-taker feedback as part of collecting validity evidence and supporting the ongoing field testing efforts of the new TOEFL ITP® Speaking section. Specifically, I aimed to investigate the extent to which test takers' perceptions of the newly proposed ITP test tasks are in agreement with the intended characteristics and qualities of the tasks. In addition, I sought insights into whether the speaking tasks are perceived as acceptable by prospective test takers and aimed to identify any unwarranted challenges posed for them in completing the tasks. A two-part questionnaire was thus administered during field testing of the new speaking section, and the resulting data were analyzed both quantitatively and qualitatively. Findings from the questionnaire data suggest that test-taker perceptions can be used to corroborate the intended (or hypothesized) properties of the tasks, while pointing to several areas for further monitoring and improvement.
{"title":"Evaluating the New TOEFL ITP® Speaking Test: Insights From Field Test Takers","authors":"Shinhye Lee","doi":"10.1002/ets2.12352","DOIUrl":"10.1002/ets2.12352","url":null,"abstract":"<p>In response to the calls for making key stakeholders' perspectives relevant in the test validation process, the study discussed in this report sought test-taker feedback as part of collecting validity evidence and supporting the ongoing field testing efforts of the new <i>TOEFL ITP</i>® Speaking section. Specifically, I aimed to investigate the extent to which test takers' perceptions of the newly proposed ITP test tasks are in agreement with the intended characteristics and qualities of the tasks. In addition, I opted to gather insights into whether the speaking tasks are perceived as acceptable by its prospective test takers and also to identify any unwarranted challenges posed for them in completing the tasks. A two-part questionnaire was thus administered during field testing of the new speaking section, and resulting data were analyzed both quantitatively and qualitatively. Findings emerging from the questionnaire data suggest that test-taker perceptions can be used to provide support to corroborate the intended (or hypothesized) properties of the tasks, while pointing to several areas for further monitoring and improvement.</p>","PeriodicalId":11972,"journal":{"name":"ETS Research Report Series","volume":"2022 1","pages":"1-19"},"PeriodicalIF":0.0,"publicationDate":"2022-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ets2.12352","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47594887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Noncredit Career and Technical Community College Programs as a Bridge to Employers: Report on NYC Region Study
Sara Haviland, Steve Robbins, Dessi Kirova, Jennifer Bochenek, Dan Fishtein
Noncredit community college programs provide an important route for workforce development. They offer affordable and accessible short-term training options for individuals seeking access to middle-skills jobs. Absent the burdens of accreditation standards, they can respond nimbly to local labor market needs. However, they can also be varied and confusing, and despite the high volume of students that they serve, they are an underexamined area in higher education. This study examines noncredit programs in the New York City labor market to determine how schools align noncredit offerings to the labor market, focusing on credential design, competencies, and market processes. It pursues a push–pull design through a combination of document review and interviews with school leaders and employers and introduces a quality taxonomy for understanding employer engagement in individual programs. Implications for students, programs, schools, and employers are explored.
The executive summary for this report can be downloaded at https://www.ets.org/Media/Research/pdf/Executive_Summary_RR-22-09.pdf
{"title":"Noncredit Career and Technical Community College Programs as a Bridge to Employers: Report on NYC Region Study","authors":"Sara Haviland, Steve Robbins, Dessi Kirova, Jennifer Bochenek, Dan Fishtein","doi":"10.1002/ets2.12351","DOIUrl":"10.1002/ets2.12351","url":null,"abstract":"<p>Noncredit community college programs provide an important route for workforce development. They offer affordable and accessible short-term training options for individuals seeking access to middle-skills jobs. Absent the burdens of accreditation standards, they can respond nimbly to local labor market needs. However, they can also be varied and confusing, and despite the high volume of students that they serve, they are an underexamined area in higher education. This study examines noncredit programs in the New York City labor market to determine how schools align noncredit offerings to the labor market, focusing on credential design, competencies, and market processes. It pursues a push–pull design through a combination of document review and interviews with school leaders and employers and introduces quality taxonomy for understanding employer engagement in individual programs. Implications for students, programs, schools, and employers are explored.</p><p>The executive summary for this report can be downloaded at \u0000https://www.ets.org/Media/Research/pdf/Executive_Summary_RR-22-09.pdf</p>","PeriodicalId":11972,"journal":{"name":"ETS Research Report Series","volume":"2022 1","pages":"1-26"},"PeriodicalIF":0.0,"publicationDate":"2022-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ets2.12351","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48593441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scoring Essays on an iPad Versus a Desktop Computer: An Exploratory Study
Guangming Ling, Jean Williams, Sue O'Brien, Carlos F. Cavalie
Recognizing the appealing features of a tablet (e.g., an iPad), including size, mobility, touch screen display, and virtual keyboard, more educational professionals are moving away from larger laptop and desktop computers and turning to the iPad for their daily work, such as reading and writing. In a recent survey of individuals who serve as ETS raters, more than 40% reported that they would prefer to use an iPad or other type of tablet to score essays. However, iPad-based essay scoring could affect scoring accuracy and scoring time because the smaller screen and other features of an iPad may affect raters' reading comprehension and score-assignment processes. To address this issue, we invited 10 experienced raters to holistically score 40 essays for a graduate admission test using a desktop computer and an iPad, following a counterbalanced design. We compared the raters' scores against the criterion scores and analyzed scoring times, scoring behaviors, and raters' answers to a structured interview conducted after the scoring experiment. The results reveal no obvious differences between the two devices in scoring accuracy or average scoring time per essay, which suggests that scoring holistically scored essays on an iPad may not reduce scoring quality or productivity compared to scoring them on a desktop computer. Raters also reported a few iPad-specific issues, including the invisible scroll bar and the extra scrolling needed to reach the score-assignment panel, difficulty navigating between the prompt and the essay response, and oversensitivity of the touch screen.
{"title":"Scoring Essays on an iPad Versus a Desktop Computer: An Exploratory Study","authors":"Guangming Ling, Jean Williams, Sue O'Brien, Carlos F. Cavalie","doi":"10.1002/ets2.12349","DOIUrl":"10.1002/ets2.12349","url":null,"abstract":"<p>Recognizing the appealing features of a tablet (e.g., an iPad), including size, mobility, touch screen display, and virtual keyboard, more educational professionals are moving away from larger laptop and desktop computers and turning to the iPad for their daily work, such as reading and writing. Following the results of a recent survey of individuals who serve as ETS raters, more than 40% reported that they would prefer to use an iPad or other type of tablet to score essays. However, iPad-based essay scoring could affect scoring accuracy and scoring time because the smaller screen and other features of an iPad may also affect raters' reading comprehension and score assigning processes. To address this issue, we invited 10 experienced raters to score holistically 40 essays for a graduate admission test using a desktop computer and an iPad following a counterbalanced design. We compared the raters' scores against the criterion scores and analyzed scoring times, scoring behaviors, and raters' answers to a structured interview after the scoring experiment. The results reveal no obvious differences between the two devices in the scoring accuracy or average scoring time per essay, which suggests that scoring on an iPad may not reduce scoring quality or scoring productivity for essays that are holistically scored as compared to scoring the essays on a desktop computer. We also found a few iPad-specific issues that raters reported, including issues associated with the invisible scrolling bar and the extra scrolling needed to reach the score-assignment panel, difficulty navigating between the prompt and the essay response, and oversensitivity of the touch screen.</p>","PeriodicalId":11972,"journal":{"name":"ETS Research Report Series","volume":"2022 1","pages":"1-13"},"PeriodicalIF":0.0,"publicationDate":"2022-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ets2.12349","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43181369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Expanding Promise: Depicting the Ecosystems of Support and Financial Sustainability for Five College Promise Populations
Catherine M. Millett
The time is now to examine the nation's capacity to help guide students in gaining access to, paying for, and graduating from college. College promise programs have served as an excellent model. But because a uniform, national college promise model would not adequately serve the estimated 20 million students in postsecondary education, ETS and College Promise launched an effort to expand the work on college promise programs to identify ecosystems of support for specific student populations. In 2021, we invited scholars, practitioners, and student representatives to join a design team and cocreate the college promise program for their student populations: first-generation students, youth in or aged out of foster care, students with disabilities, student parents, and students needing academic support. In multiple panel discussions, other colleagues reviewed the ecosystem designs, focusing on college promise programs in general, the design of the ecosystems of support, or the financing of the ecosystems. Several key themes emerged from the meeting: (a) Although the design teams focused on one aspect of a student's life, they stressed the importance of focusing on the intersectionality of their identities; (b) terminology and definitions are important not only for policy and practice reasons but for the messages they send to students about inclusion; (c) financing a college education is more than paying tuition and fees; (d) enhanced data collection will support research, policy, and practice; and (e) developing a college promise program requires a focus on both students and postsecondary institutions.
{"title":"Expanding Promise: Depicting the Ecosystems of Support and Financial Sustainability for Five College Promise Populations","authors":"Catherine M. Millett","doi":"10.1002/ets2.12350","DOIUrl":"10.1002/ets2.12350","url":null,"abstract":"<p>The time is now to examine the nation's capacity to help guide students in gaining access to, paying for, and graduating from college. College promise programs have served as an excellent model. But because a uniform, national college promise model would not adequately serve the estimated 20 million students in postsecondary education, ETS and College Promise launched an effort to expand the work on college promise programs to identify ecosystems of support for specific student populations. In 2021, we invited scholars, practitioners, and student representatives to join a design team and cocreate the college promise program for their student populations: first-generation students, youth in or aged out of foster care, students with disabilities, student parents, and students needing academic support. In multiple panel discussions, other colleagues reviewed the ecosystem designs, focusing on college promise programs in general, the design of the ecosystems of support, or the financing of the ecosystems. Several key themes emerged from the meeting: (a) Although the design teams focused on one aspect of a student's life, they stressed the importance of focusing on the intersectionality of their identities; (b) terminology and definitions are important not only for policy and practice reasons but for the messages they send to students about inclusion; (c) financing a college education is more than paying tuition and fees; (d) enhanced data collection will support research, policy, and practice; and (e) developing a college promise program requires a focus on both students and postsecondary institutions.</p>","PeriodicalId":11972,"journal":{"name":"ETS Research Report Series","volume":"2022 1","pages":"1-110"},"PeriodicalIF":0.0,"publicationDate":"2022-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ets2.12350","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44149681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}