Leveraging LLMs for optimised feature selection and embedding in structured data: A case study on graduate employment classification

Radiah Haque, Hui-Ngo Goh, Choo-Yee Ting, Albert Quek, M.D. Rakibul Hasan

Computers and Education: Artificial Intelligence, Volume 8, Article 100356
DOI: 10.1016/j.caeai.2024.100356
Published: 2024-12-22
Citations: 0
Abstract
The application of Machine Learning (ML) for predicting graduate student employability is a growing area of research, driven by the need to align educational outcomes with job market requirements. In this context, this paper investigates the application of Large Language Models (LLMs) for tabular data transformation and embedding, specifically using Bidirectional Encoder Representations from Transformers (BERT), to enhance the performance of ML models in binary classification tasks for student employability prediction. The primary objective is to determine whether converting structured data into text format improves model accuracy. The study involves several ML models, including Artificial Neural Networks (ANN), CatBoost, and a BERT classifier. The focus is on predicting the employment status of graduate students based on demographic, academic, and graduate tracer study data collected from over 4,000 university graduates. Feature selection methods, including Boruta and the Extra Tree Classifier (ETC), are employed to identify the optimal feature set, guided by a sliding window algorithm for automatic feature selection. The models are trained in four stages: 1) the original dataset without feature selection or word embedding, 2) the dataset with selected optimal features, 3) transformed data with word embedding, and 4) transformed data with feature selection applied both before and after word embedding. The baseline model (without feature selection or embedding) achieved its highest accuracy with the ANN model (79%). Subsequently, applying ETC for feature selection improved accuracy, with CatBoost achieving 83%. Further transformation with BERT-based embeddings raised the highest accuracy to 85% using the BERT classifier. Finally, the optimal accuracy of 88% was obtained by applying feature selection before and after embedding, with the BERT-Boruta model.
The findings from this study demonstrate that using the dual-stage feature selection approach in combination with BERT embedding significantly increases the classification accuracy. This highlights the potential of LLMs in transforming tabular data for enhanced graduate employment prediction.
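The tabular-to-text transformation the abstract describes can be illustrated with a minimal sketch: each structured record is serialised into a natural-language sentence that a pretrained BERT model can then tokenise and embed. The column names below are hypothetical examples, not the paper's actual schema.

```python
def serialize_record(record: dict) -> str:
    """Convert one tabular row into a sentence suitable for BERT input.

    A minimal sketch of the structured-data-to-text step; the paper's
    exact serialisation template is not specified in the abstract.
    """
    parts = [f"{key.replace('_', ' ')} is {value}" for key, value in record.items()]
    return "The graduate's " + ", ".join(parts) + "."


# Hypothetical demographic/academic features for one graduate record.
row = {"gender": "female", "cgpa": 3.4, "field_of_study": "engineering"}
text = serialize_record(row)
# The resulting sentence can then be tokenised and embedded with a
# pretrained BERT model (e.g. via the Hugging Face transformers library)
# to obtain a fixed-length vector for the downstream classifier.
```

One design consideration in this step is that spelling out column names in prose (rather than concatenating raw values) lets the language model exploit the semantics of the feature names themselves.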
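The sliding window algorithm mentioned in the abstract can be sketched as a search over importance-ranked features: a fixed-size window slides across the ranking (produced by Boruta or ETC), and the subset with the best evaluation score is kept. This is a hedged reconstruction; the paper's exact windowing scheme and scoring function may differ.

```python
from typing import Callable, Sequence, Tuple, List


def sliding_window_select(
    ranked_features: Sequence[str],
    window_size: int,
    score_fn: Callable[[Sequence[str]], float],
) -> Tuple[List[str], float]:
    """Slide a fixed-size window over importance-ranked features and
    return the window whose feature subset scores highest.

    score_fn would typically be cross-validated model accuracy on the
    candidate subset; any callable returning a float works here.
    """
    best_subset: List[str] = []
    best_score = float("-inf")
    for start in range(len(ranked_features) - window_size + 1):
        subset = list(ranked_features[start:start + window_size])
        score = score_fn(subset)  # e.g. CV accuracy of CatBoost on subset
        if score > best_score:
            best_subset, best_score = subset, score
    return best_subset, best_score
```

In the dual-stage setup the abstract reports as best (88% with BERT-Boruta), a selection pass like this would run once on the raw tabular features before text transformation, and again on the embedded representation afterwards.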