Prompts, Pearls, Imperfections: Comparing ChatGPT and a Human Researcher in Qualitative Data Analysis
Jonas Wachinger, Kate Bärnighausen, Louis N Schäfer, Kerry Scott, Shannon A McMahon
Qualitative Health Research, published online 2024-05-22. DOI: 10.1177/10497323241244669

Abstract:
The impact of ChatGPT and other large language model-based applications on scientific work is being debated across contexts and disciplines. However, despite ChatGPT's inherent focus on language generation and processing, insights regarding its potential for supporting qualitative research and analysis remain limited. In this article, we advocate for an open discourse on chances and pitfalls of AI-supported qualitative analysis by exploring ChatGPT's performance when analyzing an interview transcript based on various prompts and comparing results to those derived by an experienced human researcher. Themes identified by the human researcher and ChatGPT across analytic prompts overlapped to a considerable degree, with ChatGPT leaning toward descriptive themes but also identifying more nuanced dynamics (e.g., 'trust and responsibility' and 'acceptance and resistance'). ChatGPT was able to propose a codebook and key quotes from the transcript which had considerable face validity but would require careful review. When prompted to embed findings into broader theoretical discourses, ChatGPT could convincingly argue how identified themes linked to the provided theories, even in cases of (seemingly) unfitting models. In general, despite challenges, ChatGPT performed better than we had expected, especially on identifying themes which generally overlapped with those of an experienced researcher, and when embedding these themes into specific theoretical debates. Based on our results, we discuss several ideas on how ChatGPT could contribute to but also challenge established best-practice approaches for rigorous and nuanced qualitative research and teaching.
About the journal:
QUALITATIVE HEALTH RESEARCH is an international, interdisciplinary, refereed journal dedicated to enhancing health care and to furthering the development and understanding of qualitative research methods in health care settings. We welcome manuscripts in the following areas: the description and analysis of the illness experience, health and health-seeking behaviors, the experiences of caregivers, the sociocultural organization of health care, health care policy, and related topics. We also seek critical reviews and commentaries addressing conceptual, theoretical, methodological, and ethical issues pertaining to qualitative enquiry.