Enhancing efficiency, reliability, and rigor in competency model analysis using natural language processing

Andrew N. Garman, Melanie P. Standish, Dae Hyun Kim
{"title":"利用自然语言处理提高能力素质模型分析的效率、可靠性和严谨性","authors":"Andrew N. Garman,&nbsp;Melanie P. Standish,&nbsp;Dae Hyun Kim","doi":"10.1002/cbe2.1164","DOIUrl":null,"url":null,"abstract":"<div>\n \n <section>\n \n <h3> Background</h3>\n \n <p>Competency modeling is frequently used in higher education and workplace settings to inform a variety of learning and performance improvement programs. However, approaches commonly taken to modeling tasks can be very labor-intensive, and are vulnerable to perceptual and experience biases of raters.</p>\n </section>\n \n <section>\n \n <h3> Aims</h3>\n \n <p>The present study assesses the potential for natural language processing (NLP) to support competency-related tasks, by developing a baseline comparison of results generated by NLP to results generated by human raters.</p>\n </section>\n \n <section>\n \n <h3> Methods</h3>\n \n <p>Two raters separately conducted cross-walks for leadership competency models of graduate healthcare management programs from eight universities against a newly validated competency model from the National Center for Healthcare Leadership containing 28 competencies, to create 224 cross-walked pairs of “best matches”.</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>Results indicated that the NLP model performed at least as accurately as human raters, who required a total of 16 work hours to complete, versus the NLP calculations which were nearly instantaneous.</p>\n </section>\n \n <section>\n \n <h3> Conclusion</h3>\n \n <p>Based on these findings, we conclude that NLP has substantial promise as a high-efficiency adjunct to human evaluations in competency cross-walks.</p>\n </section>\n </div>","PeriodicalId":101234,"journal":{"name":"The Journal of Competency-Based Education","volume":"3 3","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2018-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/cbe2.1164","citationCount":"4","resultStr":"{\"title\":\"Enhancing efficiency, reliability, and rigor in competency model analysis using natural language processing\",\"authors\":\"Andrew N. Garman,&nbsp;Melanie P. Standish,&nbsp;Dae Hyun Kim\",\"doi\":\"10.1002/cbe2.1164\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n <section>\\n \\n <h3> Background</h3>\\n \\n <p>Competency modeling is frequently used in higher education and workplace settings to inform a variety of learning and performance improvement programs. 
However, approaches commonly taken to modeling tasks can be very labor-intensive, and are vulnerable to perceptual and experience biases of raters.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Aims</h3>\\n \\n <p>The present study assesses the potential for natural language processing (NLP) to support competency-related tasks, by developing a baseline comparison of results generated by NLP to results generated by human raters.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Methods</h3>\\n \\n <p>Two raters separately conducted cross-walks for leadership competency models of graduate healthcare management programs from eight universities against a newly validated competency model from the National Center for Healthcare Leadership containing 28 competencies, to create 224 cross-walked pairs of “best matches”.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Results</h3>\\n \\n <p>Results indicated that the NLP model performed at least as accurately as human raters, who required a total of 16 work hours to complete, versus the NLP calculations which were nearly instantaneous.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Conclusion</h3>\\n \\n <p>Based on these findings, we conclude that NLP has substantial promise as a high-efficiency adjunct to human evaluations in competency cross-walks.</p>\\n </section>\\n </div>\",\"PeriodicalId\":101234,\"journal\":{\"name\":\"The Journal of Competency-Based Education\",\"volume\":\"3 3\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-06-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1002/cbe2.1164\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The Journal of Competency-Based Education\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/cbe2.1164\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Journal of Competency-Based Education","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/cbe2.1164","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

Background

Competency modeling is frequently used in higher education and workplace settings to inform a variety of learning and performance-improvement programs. However, the approaches commonly taken to modeling tasks can be very labor-intensive and are vulnerable to the perceptual and experiential biases of raters.

Aims

The present study assesses the potential of natural language processing (NLP) to support competency-related tasks by developing a baseline comparison of results generated by NLP with those generated by human raters.

Methods

Two raters separately cross-walked the leadership competency models of graduate healthcare management programs at eight universities against a newly validated 28-competency model from the National Center for Healthcare Leadership, creating 224 cross-walked pairs of “best matches”.
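
The abstract does not specify which NLP technique was used to score similarity between competency statements. As a purely illustrative sketch of how “best match” pairs could be computed automatically, the Python fragment below uses TF-IDF vectors with cosine similarity as a stand-in method; the competency statements in it are hypothetical placeholders, not items from the program models or the NCHL model.

# Illustrative sketch only: TF-IDF plus cosine similarity as one possible way
# to automate the cross-walk. The statements below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical competency statements from one university program model
program_competencies = [
    "Communicates a compelling vision for the organization's future",
    "Builds and leads effective interdisciplinary teams",
]

# Hypothetical entries standing in for the 28 NCHL competencies
nchl_competencies = [
    "Strategic orientation: articulates a vision and direction",
    "Team leadership: develops and guides high-performing teams",
    "Financial skills: manages budgets and financial performance",
]

# Fit a shared TF-IDF vocabulary over both sets of statements
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(program_competencies + nchl_competencies)
program_vecs = matrix[: len(program_competencies)]
nchl_vecs = matrix[len(program_competencies):]

# Cosine similarity between every program/NCHL pair; the argmax in each row
# is that program competency's "best match" in the reference model
similarities = cosine_similarity(program_vecs, nchl_vecs)
for i, row in enumerate(similarities):
    best = row.argmax()
    print(f"{program_competencies[i]!r} -> {nchl_competencies[best]!r} "
          f"(score={row[best]:.2f})")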

Results

Results indicated that the NLP model performed at least as accurately as the human raters, who required a total of 16 work hours to complete the cross-walks, whereas the NLP calculations were nearly instantaneous.
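
The abstract reports only that the NLP model was at least as accurate as the human raters and does not state which agreement statistic was used. As a hedged illustration, one simple way to quantify agreement between NLP-selected and rater-selected best matches is exact-match agreement, optionally alongside Cohen's kappa; the index values below are hypothetical.

# Hedged illustration: quantifying agreement between NLP-selected and
# rater-selected best matches. The indices are hypothetical placeholders for
# positions in the 28-competency NCHL model.
from sklearn.metrics import cohen_kappa_score

nlp_matches = [3, 7, 7, 12, 0, 21]    # NCHL index chosen by the NLP model
rater_matches = [3, 7, 9, 12, 0, 21]  # NCHL index chosen by a human rater

exact_agreement = sum(a == b for a, b in zip(nlp_matches, rater_matches)) / len(nlp_matches)
kappa = cohen_kappa_score(nlp_matches, rater_matches)

print(f"Exact-match agreement: {exact_agreement:.0%}")
print(f"Cohen's kappa: {kappa:.2f}")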

Conclusion

Based on these findings, we conclude that NLP has substantial promise as a high-efficiency adjunct to human evaluations in competency cross-walks.
