{"title":"对口译能力倾向测试的预测有效性的质疑:一个系统的方法回顾","authors":"Chao Han","doi":"10.1080/1750399X.2023.2170049","DOIUrl":null,"url":null,"abstract":"ABSTRACT Aptitude testing is used to select candidates with the greatest potential for professional interpreter training. Implicit in this practice is the expectation that aptitude test scores predict future performance. As such, the predictive validity of score-based inferences and decisions constitutes an important rationale for aptitude testing. Although researchers have provided predictive validity evidence for different aptitudinal variables, very little research has examined the substantive meaning and robustness of such evidence. We therefore conducted this systematic review to investigate or interrogate the methodological rigour of quantitatively-based prospective cohort studies of aptitude for interpreting, focusing on the substantive meaning, psychometric soundness, and statistical analytic rigour underpinning their predictive validity evidence. Our meta-evaluation of 18 eligible studies, identified through a rigorous search and screening process, shows a diverse array of practices in the operationalisation, analysis, and reporting of aptitude tests, interpreting performance assessments, and related validity evidence. Main patterns include the collection of mostly single-site data (i.e., from a single institution), use of self-designed instruments for testing aptitude, and under-reporting of key information on measurement and statistical procedures. These findings could help researchers better interpret existing validity evidence and design future research on aptitude testing.","PeriodicalId":45693,"journal":{"name":"Interpreter and Translator Trainer","volume":"17 1","pages":"7 - 28"},"PeriodicalIF":1.8000,"publicationDate":"2023-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Interrogating the predictive validity of aptitude testing for interpreting: a systematic methodological review\",\"authors\":\"Chao Han\",\"doi\":\"10.1080/1750399X.2023.2170049\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACT Aptitude testing is used to select candidates with the greatest potential for professional interpreter training. Implicit in this practice is the expectation that aptitude test scores predict future performance. As such, the predictive validity of score-based inferences and decisions constitutes an important rationale for aptitude testing. Although researchers have provided predictive validity evidence for different aptitudinal variables, very little research has examined the substantive meaning and robustness of such evidence. We therefore conducted this systematic review to investigate or interrogate the methodological rigour of quantitatively-based prospective cohort studies of aptitude for interpreting, focusing on the substantive meaning, psychometric soundness, and statistical analytic rigour underpinning their predictive validity evidence. Our meta-evaluation of 18 eligible studies, identified through a rigorous search and screening process, shows a diverse array of practices in the operationalisation, analysis, and reporting of aptitude tests, interpreting performance assessments, and related validity evidence. 
Main patterns include the collection of mostly single-site data (i.e., from a single institution), use of self-designed instruments for testing aptitude, and under-reporting of key information on measurement and statistical procedures. These findings could help researchers better interpret existing validity evidence and design future research on aptitude testing.\",\"PeriodicalId\":45693,\"journal\":{\"name\":\"Interpreter and Translator Trainer\",\"volume\":\"17 1\",\"pages\":\"7 - 28\"},\"PeriodicalIF\":1.8000,\"publicationDate\":\"2023-01-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Interpreter and Translator Trainer\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://doi.org/10.1080/1750399X.2023.2170049\",\"RegionNum\":1,\"RegionCategory\":\"文学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"0\",\"JCRName\":\"LANGUAGE & LINGUISTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Interpreter and Translator Trainer","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1080/1750399X.2023.2170049","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"LANGUAGE & LINGUISTICS","Score":null,"Total":0}
Interrogating the predictive validity of aptitude testing for interpreting: a systematic methodological review
ABSTRACT Aptitude testing is used to select candidates with the greatest potential for professional interpreter training. Implicit in this practice is the expectation that aptitude test scores predict future performance. As such, the predictive validity of score-based inferences and decisions constitutes an important rationale for aptitude testing. Although researchers have provided predictive validity evidence for different aptitudinal variables, very little research has examined the substantive meaning and robustness of such evidence. We therefore conducted this systematic review to interrogate the methodological rigour of quantitatively based prospective cohort studies of aptitude for interpreting, focusing on the substantive meaning, psychometric soundness, and statistical analytic rigour underpinning their predictive validity evidence. Our meta-evaluation of 18 eligible studies, identified through a rigorous search and screening process, shows a diverse array of practices in the operationalisation, analysis, and reporting of aptitude tests, interpreting performance assessments, and related validity evidence. Main patterns include the collection of mostly single-site data (i.e., from a single institution), the use of self-designed instruments for testing aptitude, and the under-reporting of key information on measurement and statistical procedures. These findings could help researchers better interpret existing validity evidence and design future research on aptitude testing.
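For readers unfamiliar with how predictive validity evidence of this kind is typically produced, a minimal sketch follows. It assumes, purely for illustration, that predictive validity is operationalised as the Pearson correlation between admission aptitude scores and later interpreting-exam scores; the data, variable names, and single-predictor design below are hypothetical and are not drawn from the reviewed studies, which vary considerably in their analytic choices.

```python
# Minimal sketch: predictive validity operationalised as the correlation
# between aptitude-test scores and later interpreting-exam scores.
# All values below are hypothetical illustrations, not data from the study.
from scipy import stats

# Hypothetical cohort: admission aptitude scores and end-of-training exam scores
aptitude = [62, 71, 55, 80, 68, 74, 59, 66, 77, 70]
exam = [58, 75, 50, 82, 63, 78, 61, 60, 80, 72]

# Pearson's r estimates the linear association between the two score sets;
# a sizeable r with a small p-value is commonly read as predictive validity
# evidence, though its substantive meaning depends on the soundness of both
# measures (a central concern of the review).
r, p = stats.pearsonr(aptitude, exam)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```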