Background/objectives: Qualitative research can be laborious and time consuming, presenting a challenge for practitioners and policymakers seeking rapid, actionable results. Data collection, transcription and analysis are the main contributors to this resource intensity. OpenAI's Chat Generative Pre-trained Transformer (ChatGPT) has demonstrated potential to aid in data analysis. Our study aimed to compare themes generated by ChatGPT (3.5 and 4.0) with those generated through traditional human analysis of in-depth interviews.
Methods: Three transcripts from an evaluation study of patients' experiences at a community eye clinic were used. The transcripts were first analysed by an independent researcher. Next, the specific aims, instructions and de-identified transcripts were uploaded to ChatGPT 3.5 and ChatGPT 4.0. Concordance was calculated as the number of themes generated by ChatGPT divided by the number of themes generated by the researcher. The number of unrelated subthemes and the time taken by each ChatGPT version were also described.
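As an illustration of this calculation (the notation below is ours, not taken from the study), concordance for a given transcript can be written as

\[
\text{Concordance} = \frac{N_{\text{ChatGPT}}}{N_{\text{researcher}}} \times 100\%
\]

where \(N_{\text{ChatGPT}}\) and \(N_{\text{researcher}}\) denote the number of themes generated by ChatGPT and by the researcher, respectively. For example, if ChatGPT generated four themes against six researcher-identified themes, the concordance would be \(4/6 \approx 67\%\); this example is purely illustrative.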
Results: The average time taken per transcript was 11.5 min for ChatGPT 3.5, 11.9 min for ChatGPT 4.0 and 240 min for the researcher. Six themes were identified by the researcher: (i) clinic's accessibility, (ii) patients' awareness, (iii) trust and satisfaction, (iv) patients' expectations, (v) willingness to return and (vi) explanation of the clinic by referral source. Concordance for ChatGPT 3.5 and 4.0 ranged from 66% to 100%.
Conclusion: Preliminary results showed that ChatGPT substantially reduced analysis time while achieving moderate to good concordance with current practice. This highlights the potential of adopting ChatGPT to facilitate rapid preliminary analysis. However, regrouping of subthemes will still need to be conducted by a researcher.