Are Large Language Models a Threat to Programming Platforms? An Exploratory Study
Md Mustakim Billah, Palash Ranjan Roy, Zadia Codabux, Banani Roy
arXiv:2409.05824 (arXiv - CS - Software Engineering), published 2024-09-09
Abstract
Competitive programming platforms like LeetCode, Codeforces, and HackerRank evaluate programming skills and are often used by recruiters for screening. With the rise of advanced Large Language Models (LLMs) such as ChatGPT, Gemini, and Meta AI, their problem-solving ability on these platforms needs assessment. This study explores LLMs' ability to tackle diverse programming challenges across platforms of varying difficulty, offering insights into their real-time and offline performance and comparing them with human programmers.

We tested 98 problems from LeetCode and 126 from Codeforces, covering 15 categories. Nine online contests from Codeforces and LeetCode were conducted, along with two certification tests on HackerRank, to assess real-time performance. Prompts and feedback mechanisms were used to guide the LLMs, and correlations were explored across different scenarios.

LLMs such as ChatGPT (71.43% success on LeetCode) excelled on LeetCode and in HackerRank certifications but struggled in virtual contests, particularly on Codeforces. They performed better than human users on LeetCode archives, excelling in time and memory efficiency, but underperformed in harder Codeforces contests.

While not an immediate threat, the LLMs' performance on these platforms is concerning, and future improvements will need to be addressed.
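The abstract mentions guiding the LLMs with prompts and feedback mechanisms. The sketch below illustrates one way such a loop could be structured: submit a problem prompt, run the returned code against sample tests, and feed failing cases back for another attempt. This is not the authors' implementation; query_llm, the test-case format, and the retry limit are assumptions made purely for illustration.

    # Illustrative sketch only (not the paper's code): a prompt-and-feedback
    # loop in which an LLM's candidate solution is run against sample tests
    # and the first failing case is reported back for a corrected attempt.
    import subprocess

    def query_llm(prompt: str) -> str:
        """Hypothetical stand-in for a call to ChatGPT/Gemini/Meta AI.
        Expected to return Python source code for the given prompt."""
        raise NotImplementedError("wire this to an actual LLM API")

    def run_candidate(source: str, stdin_data: str, timeout: float = 2.0) -> str:
        """Run the candidate solution in a separate process on one test input."""
        result = subprocess.run(
            ["python3", "-c", source],
            input=stdin_data, capture_output=True, text=True, timeout=timeout,
        )
        return result.stdout.strip()

    def solve_with_feedback(problem: str,
                            tests: list[tuple[str, str]],
                            max_rounds: int = 3) -> bool:
        """Return True if the LLM passes all sample tests within max_rounds."""
        prompt = ("Solve this problem in Python, reading stdin and writing "
                  f"stdout:\n{problem}")
        for _ in range(max_rounds):
            source = query_llm(prompt)
            failures = [(inp, exp, run_candidate(source, inp))
                        for inp, exp in tests]
            failures = [(inp, exp, got) for inp, exp, got in failures
                        if got != exp.strip()]
            if not failures:
                return True  # all sample tests passed
            # Feedback step: show the first failing case and ask for a fix.
            inp, exp, got = failures[0]
            prompt = (f"Your solution failed.\nInput:\n{inp}\nExpected:\n{exp}\n"
                      f"Got:\n{got}\nReturn corrected Python code.")
        return False

A real evaluation would also need per-platform judging (hidden tests, time and memory limits), which the sketch omits.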