Evaluating Simple and Complex Models’ Performance When Predicting Accepted Answers on Stack Overflow

Osayande P. Omondiagbe, Sherlock A. Licorish, Stephen G. MacDonell
{"title":"Evaluating Simple and Complex Models’ Performance When Predicting Accepted Answers on Stack Overflow","authors":"Osayande P. Omondiagbe, Sherlock A. Licorish, Stephen G. MacDonell","doi":"10.1109/SEAA56994.2022.00014","DOIUrl":null,"url":null,"abstract":"Stack Overflow is used to solve programming issues during software development. Research efforts have looked to identify relevant content on this platform. In particular, researchers have proposed various modelling techniques to predict acceptable Stack Overflow answers. Less interest, however, has been dedicated to examining the performance and quality of typically used modelling methods with respect to the model and feature complexity. Such insights could be of practical significance to the many practitioners who develop models for Stack Overflow. This study examines the performance and quality of two modelling methods, of varying degree of complexity, used for predicting Java and JavaScript acceptable answers on Stack Overflow. Our dataset comprised 249,588 posts drawn from years 2014-2016. Outcomes reveal significant differences in models’ performances and quality given the type of features and complexity of models used. Researchers examining model performance and quality and feature complexity may leverage these findings in selecting suitable modelling approaches for Q&A prediction.","PeriodicalId":269970,"journal":{"name":"2022 48th Euromicro Conference on Software Engineering and Advanced Applications (SEAA)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 48th Euromicro Conference on Software Engineering and Advanced Applications (SEAA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SEAA56994.2022.00014","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Stack Overflow is used to solve programming issues during software development. Research efforts have looked to identify relevant content on this platform. In particular, researchers have proposed various modelling techniques to predict acceptable Stack Overflow answers. Less attention, however, has been dedicated to examining the performance and quality of typically used modelling methods with respect to model and feature complexity. Such insights could be of practical significance to the many practitioners who develop models for Stack Overflow. This study examines the performance and quality of two modelling methods, of varying degrees of complexity, used for predicting acceptable Java and JavaScript answers on Stack Overflow. Our dataset comprised 249,588 posts drawn from the years 2014-2016. Outcomes reveal significant differences in models' performance and quality given the type of features and the complexity of the models used. Researchers examining model performance, quality, and feature complexity may leverage these findings in selecting suitable modelling approaches for Q&A prediction.
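This page does not name the specific modelling methods or features the authors evaluated, so the following is only a minimal sketch of the kind of simple-versus-complex comparison the abstract describes: a simpler classifier (logistic regression) against a more complex one (a random forest), trained on hypothetical, synthetically generated answer features. All feature names, model choices, and data below are assumptions for illustration, not the paper's actual setup.

```python
# Illustrative sketch only: contrasts a "simple" and a "complex" model for
# predicting whether a Stack Overflow answer is accepted. Features and data
# are synthetic stand-ins, not the paper's dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 5000

# Hypothetical per-answer features (assumed, for demonstration):
# answer length in characters, number of code blocks,
# answerer reputation, minutes until the answer was posted.
X = np.column_stack([
    rng.normal(300, 120, n),
    rng.poisson(1.5, n),
    rng.lognormal(7, 1.2, n),
    rng.exponential(60, n),
])

# Synthetic "accepted" label that loosely favours longer, code-rich,
# quickly posted answers.
logits = 0.002 * X[:, 0] + 0.4 * X[:, 1] - 0.01 * X[:, 3] - 1.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Simple model: scaled logistic regression.
# Complex model: a random forest with 200 trees.
models = [
    ("logistic regression",
     make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ("random forest",
     RandomForestClassifier(n_estimators=200, random_state=0)),
]

for name, model in models:
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(f"{name}: F1 = {f1_score(y_test, pred):.3f}")
```

A real replication would instead derive features from the Stack Overflow data dump for the 2014-2016 Java and JavaScript posts the abstract mentions, and compare the models under the evaluation protocol used in the paper.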