Leveraging LLMs for optimised feature selection and embedding in structured data: A case study on graduate employment classification

Radiah Haque, Hui-Ngo Goh, Choo-Yee Ting, Albert Quek, M.D. Rakibul Hasan
{"title":"Leveraging LLMs for optimised feature selection and embedding in structured data: A case study on graduate employment classification","authors":"Radiah Haque,&nbsp;Hui-Ngo Goh,&nbsp;Choo-Yee Ting,&nbsp;Albert Quek,&nbsp;M.D. Rakibul Hasan","doi":"10.1016/j.caeai.2024.100356","DOIUrl":null,"url":null,"abstract":"<div><div>The application of Machine Learning (ML) for predicting graduate student employability is a growing area of research, driven by the need to align educational outcomes with job market requirements. In this context, this paper investigates the application of Large Language Models (LLMs) for tabular data transformation and embedding, specifically using Bidirectional Encoder Representations from Transformers (BERT), to enhance the performance of ML models in binary classification tasks for student employability prediction. The primary objective is to determine whether converting structured data into text format improves model accuracy. The study involves several ML models including Artificial Neural Networks (ANN), CatBoost, and BERT classifier. The focus is on predicting the employment status of graduate students based on demographic, academic, and graduate tracer study data, collected from over 4000 university graduates. Feature selection methods, including Boruta and Extra Tree Classifier (ETC) are employed to identify the optimal feature set, guided by a sliding window algorithm for automatic feature selection. The models are trained in four stages: 1) original dataset without feature selection or word embedding, 2) dataset with selected optimal features, 3) transformed data with word embedding, and 4) transformed data with feature selection applied both before and after word embedding. The baseline model (without feature selection and embedding) achieved the highest accuracy with the ANN model (79%). Subsequently, applying ETC for feature selection improved accuracy, with CatBoost achieving 83%. 
Further transformation with BERT-based embeddings raised the highest accuracy to 85% using the BERT classifier. Finally, the optimal accuracy of 88% was obtained by applying feature selection before and after embedding, with the BERT-Boruta model. The findings from this study demonstrate that using the dual-stage feature selection approach in combination with BERT embedding significantly increases the classification accuracy. This highlights the potential of LLMs in transforming tabular data for enhanced graduate employment prediction.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"8 ","pages":"Article 100356"},"PeriodicalIF":0.0000,"publicationDate":"2024-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers and Education Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666920X24001590","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Social Sciences","Score":null,"Total":0}

Abstract

The application of Machine Learning (ML) for predicting graduate student employability is a growing area of research, driven by the need to align educational outcomes with job market requirements. In this context, this paper investigates the application of Large Language Models (LLMs) for tabular data transformation and embedding, specifically using Bidirectional Encoder Representations from Transformers (BERT), to enhance the performance of ML models in binary classification tasks for student employability prediction. The primary objective is to determine whether converting structured data into text format improves model accuracy. The study involves several ML models, including Artificial Neural Networks (ANN), CatBoost, and a BERT classifier. The focus is on predicting the employment status of graduate students based on demographic, academic, and graduate tracer study data collected from over 4000 university graduates. Feature selection methods, including Boruta and the Extra Tree Classifier (ETC), are employed to identify the optimal feature set, guided by a sliding window algorithm for automatic feature selection. The models are trained in four stages: 1) the original dataset without feature selection or word embedding, 2) the dataset with selected optimal features, 3) transformed data with word embedding, and 4) transformed data with feature selection applied both before and after word embedding. The baseline model (without feature selection and embedding) achieved the highest accuracy with the ANN model (79%). Subsequently, applying ETC for feature selection improved accuracy, with CatBoost achieving 83%. Further transformation with BERT-based embeddings raised the highest accuracy to 85% using the BERT classifier. Finally, the optimal accuracy of 88% was obtained by applying feature selection before and after embedding, with the BERT-Boruta model.
The findings from this study demonstrate that using the dual-stage feature selection approach in combination with BERT embedding significantly increases the classification accuracy. This highlights the potential of LLMs in transforming tabular data for enhanced graduate employment prediction.
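The tabular-to-text transformation at the heart of this pipeline can be sketched as a simple row serializer. The template and column names below are illustrative assumptions, not the paper's exact scheme; the resulting sentence is the kind of text sequence that would be fed to a BERT tokenizer for embedding.

```python
def row_to_text(row: dict) -> str:
    """Serialize one tabular record into a natural-language sentence.

    Hypothetical template: each "column is value" clause is joined with
    semicolons, producing a single text sequence for a BERT tokenizer.
    """
    clauses = [f"{col.replace('_', ' ')} is {val}" for col, val in row.items()]
    return "; ".join(clauses) + "."

# Illustrative graduate record (column names are assumptions, not the study's schema).
record = {"gender": "female", "faculty": "Computing", "cgpa": 3.7, "internship": "yes"}
print(row_to_text(record))
# gender is female; faculty is Computing; cgpa is 3.7; internship is yes.
```

In practice, each serialized row would be tokenized and embedded (e.g., with a pretrained BERT model), and the embedding vectors would replace the raw tabular features as model input.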
Source journal metrics: CiteScore 16.80; self-citation rate 0.00%; 66 articles published; review time 50 days.