Andrew N. Garman, Melanie P. Standish, Dae Hyun Kim
"Enhancing efficiency, reliability, and rigor in competency model analysis using natural language processing." The Journal of Competency-Based Education, 3(3), published 2018-06-12. DOI: 10.1002/cbe2.1164
Citations: 4
Abstract
Background
Competency modeling is frequently used in higher education and workplace settings to inform a variety of learning and performance improvement programs. However, the approaches commonly taken to modeling tasks can be very labor-intensive and are vulnerable to raters' perceptual and experiential biases.
Aims
The present study assesses the potential for natural language processing (NLP) to support competency-related tasks by developing a baseline comparison of results generated by NLP against those generated by human raters.
Methods
Two raters independently cross-walked the leadership competency models of graduate healthcare management programs at eight universities against a newly validated competency model from the National Center for Healthcare Leadership (NCHL) containing 28 competencies, yielding 224 (8 × 28) cross-walked "best match" pairs. A sketch of how such a cross-walk could be automated follows.
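The abstract does not state which NLP method the authors used; as a minimal illustration of one plausible approach, the sketch below pairs each reference competency with its most similar program competency using TF-IDF vectors and cosine similarity (scikit-learn). All competency texts are hypothetical placeholders, not items from either actual model.

```python
# A minimal sketch of one way an NLP cross-walk could work: TF-IDF
# vectors plus cosine similarity. The study's actual NLP method is not
# stated in the abstract; competency texts are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Reference model: the NCHL model contains 28 competencies; two
# hypothetical examples stand in for the full list here.
nchl = [
    "Communicates clearly and persuasively with diverse stakeholders",
    "Builds and develops high-performing teams",
]

# One university program's competency model (hypothetical examples).
program = [
    "Demonstrates effective written and verbal communication",
    "Fosters collaboration and team development",
    "Applies financial analysis to organizational decisions",
]

# Fit one vocabulary over both models so the vectors are comparable.
vectorizer = TfidfVectorizer(stop_words="english")
vectorizer.fit(nchl + program)
ref_vecs = vectorizer.transform(nchl)
prog_vecs = vectorizer.transform(program)

# For each reference competency, the "best match" is the program
# competency with the highest cosine similarity. Repeating this for
# 8 programs would yield 8 x 28 = 224 cross-walked pairs, as in the study.
sims = cosine_similarity(ref_vecs, prog_vecs)  # shape: (len(nchl), len(program))
for i, j in enumerate(sims.argmax(axis=1)):
    print(f"{nchl[i]!r} -> {program[j]!r} (cosine={sims[i, j]:.2f})")
```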
Results
Results indicated that the NLP model performed at least as accurately as the human raters. The human cross-walks required a total of 16 work hours to complete, whereas the NLP calculations were nearly instantaneous.
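To make the accuracy comparison concrete, the sketch below scores NLP-generated best matches against a human rater's choices using simple percentage agreement over the 224 pairs. The abstract does not report the study's actual evaluation metric, and the index arrays here are synthetic stand-ins.

```python
# Sketch: scoring NLP best matches against a human rater's choices via
# simple percentage agreement. The study's actual evaluation metric is
# not stated in the abstract; the arrays below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(42)

# One best-match index (a program competency) per NCHL competency per
# program: 8 programs x 28 competencies = 224 cross-walked pairs.
human = rng.integers(0, 20, size=224)            # human rater's picks
nlp = human.copy()                               # start from full agreement...
disagree = rng.choice(224, size=18, replace=False)
nlp[disagree] = rng.integers(0, 20, size=18)     # ...then perturb a few picks

agreement = float(np.mean(human == nlp))
print(f"NLP agreed with the human rater on {agreement:.1%} of 224 pairs")
```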
Conclusion
Based on these findings, we conclude that NLP has substantial promise as a high-efficiency adjunct to human evaluations in competency cross-walks.