{"title":"DSBench:数据科学代理离成为数据科学专家还有多远?","authors":"Liqiang Jing, Zhehui Huang, Xiaoyang Wang, Wenlin Yao, Wenhao Yu, Kaixin Ma, Hongming Zhang, Xinya Du, Dong Yu","doi":"arxiv-2409.07703","DOIUrl":null,"url":null,"abstract":"Large Language Models (LLMs) and Large Vision-Language Models (LVLMs) have\ndemonstrated impressive language/vision reasoning abilities, igniting the\nrecent trend of building agents for targeted applications such as shopping\nassistants or AI software engineers. Recently, many data science benchmarks\nhave been proposed to investigate their performance in the data science domain.\nHowever, existing data science benchmarks still fall short when compared to\nreal-world data science applications due to their simplified settings. To\nbridge this gap, we introduce DSBench, a comprehensive benchmark designed to\nevaluate data science agents with realistic tasks. This benchmark includes 466\ndata analysis tasks and 74 data modeling tasks, sourced from Eloquence and\nKaggle competitions. DSBench offers a realistic setting by encompassing long\ncontexts, multimodal task backgrounds, reasoning with large data files and\nmulti-table structures, and performing end-to-end data modeling tasks. Our\nevaluation of state-of-the-art LLMs, LVLMs, and agents shows that they struggle\nwith most tasks, with the best agent solving only 34.12% of data analysis tasks\nand achieving a 34.74% Relative Performance Gap (RPG). These findings\nunderscore the need for further advancements in developing more practical,\nintelligent, and autonomous data science agents.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":"7 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DSBench: How Far Are Data Science Agents to Becoming Data Science Experts?\",\"authors\":\"Liqiang Jing, Zhehui Huang, Xiaoyang Wang, Wenlin Yao, Wenhao Yu, Kaixin Ma, Hongming Zhang, Xinya Du, Dong Yu\",\"doi\":\"arxiv-2409.07703\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Large Language Models (LLMs) and Large Vision-Language Models (LVLMs) have\\ndemonstrated impressive language/vision reasoning abilities, igniting the\\nrecent trend of building agents for targeted applications such as shopping\\nassistants or AI software engineers. Recently, many data science benchmarks\\nhave been proposed to investigate their performance in the data science domain.\\nHowever, existing data science benchmarks still fall short when compared to\\nreal-world data science applications due to their simplified settings. To\\nbridge this gap, we introduce DSBench, a comprehensive benchmark designed to\\nevaluate data science agents with realistic tasks. This benchmark includes 466\\ndata analysis tasks and 74 data modeling tasks, sourced from Eloquence and\\nKaggle competitions. DSBench offers a realistic setting by encompassing long\\ncontexts, multimodal task backgrounds, reasoning with large data files and\\nmulti-table structures, and performing end-to-end data modeling tasks. Our\\nevaluation of state-of-the-art LLMs, LVLMs, and agents shows that they struggle\\nwith most tasks, with the best agent solving only 34.12% of data analysis tasks\\nand achieving a 34.74% Relative Performance Gap (RPG). 
These findings\\nunderscore the need for further advancements in developing more practical,\\nintelligent, and autonomous data science agents.\",\"PeriodicalId\":501479,\"journal\":{\"name\":\"arXiv - CS - Artificial Intelligence\",\"volume\":\"7 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.07703\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07703","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
DSBench: How Far Are Data Science Agents to Becoming Data Science Experts?
Large Language Models (LLMs) and Large Vision-Language Models (LVLMs) have
demonstrated impressive language/vision reasoning abilities, igniting the
recent trend of building agents for targeted applications such as shopping
assistants or AI software engineers. Recently, many data science benchmarks
have been proposed to investigate the performance of such models and agents in the
data science domain.
However, existing data science benchmarks still fall short when compared to
real-world data science applications due to their simplified settings. To
bridge this gap, we introduce DSBench, a comprehensive benchmark designed to
evaluate data science agents with realistic tasks. This benchmark includes 466
data analysis tasks and 74 data modeling tasks, sourced from Eloquence and
Kaggle competitions. DSBench offers a realistic setting by encompassing long
contexts, multimodal task backgrounds, reasoning with large data files and
multi-table structures, and performing end-to-end data modeling tasks. Our
evaluation of state-of-the-art LLMs, LVLMs, and agents shows that they struggle
with most tasks, with the best agent solving only 34.12% of data analysis tasks
and achieving a 34.74% Relative Performance Gap (RPG). These findings
underscore the need for further advancements in developing more practical,
intelligent, and autonomous data science agents.
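
Note on the Relative Performance Gap (RPG) cited above: the abstract does not define the metric, so the following is a minimal sketch of one plausible formulation, assuming RPG normalizes the agent's score between a simple baseline and the best known (e.g., top leaderboard) score; the symbols s_agent, s_base, and s_best are illustrative and not taken from the paper.

% Sketch only: one plausible way a Relative Performance Gap could be computed.
% Assumption: s_agent, s_base, and s_best are illustrative symbols, not the paper's notation.
\[
\mathrm{RPG} \;=\; \frac{s_{\mathrm{agent}} - s_{\mathrm{base}}}{s_{\mathrm{best}} - s_{\mathrm{base}}} \times 100\%
\]

Under this reading, the reported 34.74% would mean the best agent closes only about a third of the gap between a naive baseline and the strongest known result on the data modeling tasks.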