What if ChatGPT generates quantitative research data? A case study in tourism
Serhat Adem Sop, Doğa Kurçer
DOI: 10.1108/jhtt-08-2023-0237 (https://doi.org/10.1108/jhtt-08-2023-0237)
Publication date: 2024-02-21 (Journal Article)
Citations: 0
Abstract
Purpose
This study aims to explore whether Chat Generative Pre-training Transformer (ChatGPT) can produce quantitative data sets for researchers who might behave unethically by fabricating data.
Design/methodology/approach
A two-stage case study in the field of tourism was conducted, in which ChatGPT (v3.5) was asked to respond to the first questionnaire on behalf of 400 participants and to the second on behalf of 800 participants. The quality of the artificial intelligence (AI)-generated data sets was statistically tested via descriptive statistics, correlation analysis, exploratory factor analysis, confirmatory factor analysis and Harman's single-factor test.
Findings
The results revealed that ChatGPT could respond to the questionnaires on behalf of as many participants as the desired sample size and could present the generated data sets in a table format ready for analysis. It was also observed that ChatGPT's responses were systematic and that it created a statistically ideal data set. However, the generated data exhibited excessively high correlations among the observed variables, the measurement model did not achieve sufficient goodness of fit and the issue of common method bias emerged. The conclusion reached is that ChatGPT does not, or cannot yet, generate data of suitable quality for advanced-level statistical analyses.
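The common method bias check mentioned in the findings can be illustrated with a minimal sketch of Harman's single-factor test. This is a hedged, illustrative example on synthetic data, not the study's actual analysis; the `harman_single_factor` helper and the simulated item matrix are assumptions. The first unrotated factor is approximated here by the first principal component of the item correlation matrix; a variance share above 50% is the conventional red flag.

```python
import numpy as np

def harman_single_factor(X, threshold=0.5):
    """Harman's single-factor test (illustrative): if the first
    unrotated component of the item correlation matrix explains more
    than `threshold` of the total variance, common method bias is
    suspected."""
    R = np.corrcoef(X, rowvar=False)       # item correlation matrix
    eigvals = np.linalg.eigvalsh(R)[::-1]  # eigenvalues, descending
    share = eigvals[0] / eigvals.sum()     # variance share of 1st component
    return share, share > threshold

rng = np.random.default_rng(0)
n = 400  # mirrors the first-stage sample size in the abstract
# Simulate "too systematic" responses: every item driven by one latent
# factor, mimicking the high inter-item correlations the study reports.
latent = rng.normal(size=(n, 1))
items = latent + 0.3 * rng.normal(size=(n, 12))

share, biased = harman_single_factor(items)
print(f"first-component variance share: {share:.2f}, bias suspected: {biased}")
```

Because every simulated item loads on a single latent factor, the first component dominates and the test flags common method bias, which is the pattern the study attributes to ChatGPT's overly uniform responses.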
Originality/value
This study shows that ChatGPT can provide quantitative data to researchers attempting to fabricate data sets unethically. It therefore offers a new and significant argument to the ongoing debates about the unethical use of ChatGPT. In addition, this study is the first to statistically examine a quantitative data set generated by AI. The results showed that the data produced by ChatGPT are problematic in certain respects, highlighting several points that journal editors should consider during editorial processes.