Are Large Language Models a Threat to Programming Platforms? An Exploratory Study

Md Mustakim Billah, Palash Ranjan Roy, Zadia Codabux, Banani Roy
{"title":"Are Large Language Models a Threat to Programming Platforms? An Exploratory Study","authors":"Md Mustakim Billah, Palash Ranjan Roy, Zadia Codabux, Banani Roy","doi":"arxiv-2409.05824","DOIUrl":null,"url":null,"abstract":"Competitive programming platforms like LeetCode, Codeforces, and HackerRank\nevaluate programming skills, often used by recruiters for screening. With the\nrise of advanced Large Language Models (LLMs) such as ChatGPT, Gemini, and Meta\nAI, their problem-solving ability on these platforms needs assessment. This\nstudy explores LLMs' ability to tackle diverse programming challenges across\nplatforms with varying difficulty, offering insights into their real-time and\noffline performance and comparing them with human programmers. We tested 98 problems from LeetCode, 126 from Codeforces, covering 15\ncategories. Nine online contests from Codeforces and LeetCode were conducted,\nalong with two certification tests on HackerRank, to assess real-time\nperformance. Prompts and feedback mechanisms were used to guide LLMs, and\ncorrelations were explored across different scenarios. LLMs, like ChatGPT (71.43% success on LeetCode), excelled in LeetCode and\nHackerRank certifications but struggled in virtual contests, particularly on\nCodeforces. They performed better than users in LeetCode archives, excelling in\ntime and memory efficiency but underperforming in harder Codeforces contests.\nWhile not immediately threatening, LLMs performance on these platforms is\nconcerning, and future improvements will need addressing.","PeriodicalId":501278,"journal":{"name":"arXiv - CS - Software Engineering","volume":"8 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Software Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.05824","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Competitive programming platforms like LeetCode, Codeforces, and HackerRank evaluate programming skills and are often used by recruiters for screening. With the rise of advanced Large Language Models (LLMs) such as ChatGPT, Gemini, and Meta AI, their problem-solving ability on these platforms needs assessment. This study explores LLMs' ability to tackle diverse programming challenges across platforms of varying difficulty, offering insights into their real-time and offline performance and comparing them with human programmers. We tested 98 problems from LeetCode and 126 from Codeforces, covering 15 categories. Nine online contests from Codeforces and LeetCode were conducted, along with two certification tests on HackerRank, to assess real-time performance. Prompts and feedback mechanisms were used to guide the LLMs, and correlations were explored across different scenarios. LLMs such as ChatGPT (71.43% success on LeetCode) excelled on LeetCode and in HackerRank certifications but struggled in virtual contests, particularly on Codeforces. They outperformed users on archived LeetCode problems, excelling in time and memory efficiency, but underperformed in harder Codeforces contests. While not an immediate threat, LLMs' performance on these platforms is concerning, and future improvements will need to be addressed.
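
The 71.43% LeetCode success rate works out to roughly 70 of the 98 tested problems (70/98 ≈ 71.43%). Below is a minimal sketch of the kind of prompt-and-feedback evaluation loop the abstract describes; the names (`query_llm`, `run_judge`, `Problem`) and the retry structure are illustrative assumptions, not the authors' actual harness.

```python
# Minimal sketch of a prompt-and-feedback evaluation loop (hypothetical; not the
# paper's actual tooling). An LLM is asked for a solution, the code is judged
# against test cases, and failing verdicts are fed back for another attempt.
from dataclasses import dataclass


@dataclass
class Problem:
    title: str
    statement: str  # full problem description
    tests: list     # list of (input, expected_output) pairs


def query_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM API (e.g. ChatGPT); returns source code."""
    raise NotImplementedError


def run_judge(code: str, tests: list) -> tuple[bool, str]:
    """Placeholder: run the code against the tests, return (passed, verdict)."""
    raise NotImplementedError


def solve_with_feedback(problem: Problem, max_attempts: int = 3) -> bool:
    """Ask the model for a solution, feeding judge verdicts back on failure."""
    prompt = f"Solve this programming problem:\n\n{problem.statement}"
    for _ in range(max_attempts):
        code = query_llm(prompt)
        passed, verdict = run_judge(code, problem.tests)
        if passed:
            return True
        # Feedback mechanism: append the verdict and ask for a corrected solution.
        prompt += f"\n\nYour previous solution failed with: {verdict}. Please fix it."
    return False


def success_rate(problems: list) -> float:
    """Fraction of problems solved within the attempt budget, as a percentage."""
    solved = sum(solve_with_feedback(p) for p in problems)
    return 100.0 * solved / len(problems)  # e.g. 70 of 98 solved -> 71.43%
```

The offline results (problem archives) and real-time results (virtual contests and certifications) could both be scored with a rate like this; the attempt budget and prompt wording are design choices the abstract does not specify.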