Comparative Evaluation and Comprehensive Analysis of Machine Learning Models for Regression Problems

Data Intelligence · Vol. 4, pp. 620-652 · IF 1.3 · JCR Q3 (Computer Science, Information Systems) · CAS Zone 3 · Publication date: 2022-07-01 · DOI: 10.1162/dint_a_00155
Boran Sekeroglu, Y. K. Ever, Kamil Dimililer, F. Al-turjman
{"title":"Comparative Evaluation and Comprehensive Analysis of Machine Learning Models for Regression Problems","authors":"Boran Sekerogiu, Y. K. Ever, Kamil Dimililer, F. Al-turjman","doi":"10.1162/dint_a_00155","DOIUrl":null,"url":null,"abstract":"Abstract Artificial intelligence and machine learning applications are of significant importance almost in every field of human life to solve problems or support human experts. However, the determination of the machine learning model to achieve a superior result for a particular problem within the wide real-life application areas is still a challenging task for researchers. The success of a model could be affected by several factors such as dataset characteristics, training strategy and model responses. Therefore, a comprehensive analysis is required to determine model ability and the efficiency of the considered strategies. This study implemented ten benchmark machine learning models on seventeen varied datasets. Experiments are performed using four different training strategies 60:40, 70:30, and 80:20 hold-out and five-fold cross-validation techniques. We used three evaluation metrics to evaluate the experimental results: mean squared error, mean absolute error, and coefficient of determination (R2 score). The considered models are analyzed, and each model's advantages, disadvantages, and data dependencies are indicated. As a result of performed excess number of experiments, the deep Long-Short Term Memory (LSTM) neural network outperformed other considered models, namely, decision tree, linear regression, support vector regression with a linear and radial basis function kernels, random forest, gradient boosting, extreme gradient boosting, shallow neural network, and deep neural network. It has also been shown that cross-validation has a tremendous impact on the results of the experiments and should be considered for the model evaluation in regression studies where data mining or selection is not performed.","PeriodicalId":34023,"journal":{"name":"Data Intelligence","volume":"4 1","pages":"620-652"},"PeriodicalIF":1.3000,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Data Intelligence","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1162/dint_a_00155","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 6

Abstract

Artificial intelligence and machine learning applications are of significant importance in almost every field of human life, whether to solve problems or to support human experts. However, determining which machine learning model will achieve superior results for a particular problem across the wide range of real-life application areas remains a challenging task for researchers. The success of a model can be affected by several factors, such as dataset characteristics, training strategy, and model responses. Therefore, a comprehensive analysis is required to determine model ability and the efficiency of the considered strategies. This study implemented ten benchmark machine learning models on seventeen varied datasets. Experiments were performed using four different training strategies: 60:40, 70:30, and 80:20 hold-out splits and five-fold cross-validation. Three evaluation metrics were used to assess the experimental results: mean squared error (MSE), mean absolute error (MAE), and the coefficient of determination (R2 score). The considered models are analyzed, and each model's advantages, disadvantages, and data dependencies are indicated. Across this extensive set of experiments, the deep Long Short-Term Memory (LSTM) neural network outperformed the other considered models, namely decision tree, linear regression, support vector regression with linear and radial basis function kernels, random forest, gradient boosting, extreme gradient boosting, shallow neural network, and deep neural network. It was also shown that cross-validation has a tremendous impact on the experimental results and should be considered for model evaluation in regression studies where data mining or selection is not performed.
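The paper itself provides no code; the sketch below is a hypothetical illustration of the evaluation protocol the abstract describes, comparing regressors under the three hold-out ratios and five-fold cross-validation with MSE, MAE, and R2. It uses scikit-learn estimators for a subset of the ten models (the LSTM and the shallow/deep neural networks are omitted) and a synthetic dataset in place of the seventeen benchmark datasets, so every name and parameter here is an assumption rather than the authors' implementation.

```python
# Minimal sketch (not the authors' code): evaluate several regressors under
# 60:40, 70:30 and 80:20 hold-out splits and five-fold cross-validation,
# scoring each run with MSE, MAE and R^2 as described in the abstract.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import KFold, train_test_split
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in for one of the seventeen benchmark datasets.
X, y = make_regression(n_samples=1000, n_features=20, noise=10.0, random_state=0)

models = {
    "linear_regression": LinearRegression(),
    "decision_tree": DecisionTreeRegressor(random_state=0),
    "svr_rbf": SVR(kernel="rbf"),
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
}


def score(y_true, y_pred):
    """Return the three metrics used in the study for one set of predictions."""
    return {
        "mse": mean_squared_error(y_true, y_pred),
        "mae": mean_absolute_error(y_true, y_pred),
        "r2": r2_score(y_true, y_pred),
    }


results = {}

# Hold-out strategies: train/test ratios of 60:40, 70:30 and 80:20.
for test_size, label in ((0.4, "holdout 60:40"), (0.3, "holdout 70:30"), (0.2, "holdout 80:20")):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=test_size, random_state=0)
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        results[(name, label)] = score(y_te, model.predict(X_te))

# Five-fold cross-validation: average each metric over the five folds.
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    fold_scores = []
    for train_idx, test_idx in kfold.split(X):
        model.fit(X[train_idx], y[train_idx])
        fold_scores.append(score(y[test_idx], model.predict(X[test_idx])))
    results[(name, "5-fold CV")] = {
        metric: float(np.mean([s[metric] for s in fold_scores]))
        for metric in ("mse", "mae", "r2")
    }

for (name, strategy), metrics in sorted(results.items()):
    print(f"{name:18s} {strategy:14s} MSE={metrics['mse']:10.2f}  "
          f"MAE={metrics['mae']:7.2f}  R2={metrics['r2']:.3f}")
```

Averaging the fold-level metrics, as done for the cross-validation branch above, is what lets hold-out and cross-validation results be compared on the same MSE/MAE/R2 scale, which is the comparison the abstract's final claim rests on.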
Source journal: Data Intelligence (Computer Science, Information Systems)
CiteScore: 6.50
Self-citation rate: 15.40%
Articles per year: 40
Review time: 8 weeks