Daniel E. O'Leary. AI Magazine, vol. 44, no. 3, pp. 282-295. Published 2023-08-30 (Journal Article). DOI: 10.1002/aaai.12118. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12118. Journal Impact Factor 2.5; JCR Q3, Computer Science, Artificial Intelligence; citation count: 1.
An analysis of Watson vs. BARD vs. ChatGPT: The Jeopardy! Challenge
The recently released BARD and ChatGPT have generated substantial interest among researchers and institutions concerned about their impact on education, medicine, law, and more. This paper uses questions from the Watson Jeopardy! Challenge to compare BARD, ChatGPT, and Watson. Using those Jeopardy! questions, we find that on the questions Watson answered with high confidence, the three systems perform with similar accuracy. We also find that both BARD and ChatGPT perform with the accuracy of a human expert, and that their sets of correct answers are highly similar under a Tanimoto similarity score. However, we also find that both systems can change their solutions to the same input on subsequent uses: when given the same Jeopardy! category and question multiple times, both BARD and ChatGPT can generate different and conflicting answers. The paper therefore examines the characteristics of some of the questions that yield different answers to the same inputs. Finally, it discusses some implications of these differing answers and the impact of the lack of reproducibility on testing such systems.
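The Tanimoto similarity the abstract uses to compare the two systems' sets of correct answers can be sketched as follows. For sets, the Tanimoto score reduces to the Jaccard index, |A ∩ B| / |A ∪ B|; the question identifiers below are hypothetical placeholders, and the paper's own scoring procedure may differ in detail.

```python
def tanimoto(a, b):
    """Tanimoto (Jaccard) similarity between two sets: |A & B| / |A | B|."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty sets are conventionally identical
    return len(a & b) / len(a | b)

# Hypothetical IDs of questions each system answered correctly.
bard_correct = {"Q1", "Q2", "Q3", "Q4"}
chatgpt_correct = {"Q2", "Q3", "Q4", "Q5"}

# 3 shared correct answers out of 5 distinct questions -> 0.6
print(tanimoto(bard_correct, chatgpt_correct))  # 0.6
```

A score near 1.0 would indicate the two systems succeed and fail on largely the same questions, which is the sense in which the paper rates the answer sets "highly similar."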
Journal overview:
AI Magazine publishes original articles that are reasonably self-contained and aimed at a broad spectrum of the AI community. Technical content should be kept to a minimum. In general, the magazine does not publish articles that have been published elsewhere in whole or in part. The magazine welcomes contributions on the theory and practice of AI, as well as general survey articles, tutorial articles on timely topics, conference, symposium, or workshop reports, and timely columns on topics of interest to AI scientists.