Unravelling ChatGPT's potential in summarising qualitative in-depth interviews
Mei Hui Adeline Kon, Michelle Jessica Pereira, Joseph Antonio De Castro Molina, Vivien Cherng Hui Yip, John Arputhan Abisheganaden, WanFen Yip
Eye (published 2024-11-05). DOI: 10.1038/s41433-024-03419-0
Abstract
Background/objectives: Qualitative research can be laborious and time-consuming, presenting a challenge for practitioners and policymakers seeking rapid, actionable results. Data collection, transcription and analysis are the main contributors to its resource-intensive nature. OpenAI's Chat Generative Pre-trained Transformer (ChatGPT) has demonstrated potential to aid in data analysis. Our study aimed to compare themes generated by ChatGPT (3.5 and 4.0) with traditional human analysis of in-depth interviews.
Methods: Three transcripts from an evaluation study to understand patients' experiences at a community eye clinic were used. Transcripts were first analysed by an independent researcher. Next, specific aims, instructions and de-identified transcripts were uploaded to ChatGPT 3.5 and ChatGPT 4.0. Concordance in the themes was calculated as the number of themes generated by ChatGPT divided by the number of themes generated by the researcher. The number of unrelated subthemes and the time taken by both ChatGPT versions were also recorded.
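The concordance calculation described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name, the example theme lists, and the interpretation that "themes generated by ChatGPT" means the subset matching the researcher's themes are all assumptions for the sake of the example.

```python
def concordance(researcher_themes, chatgpt_themes):
    """Fraction of researcher-identified themes that ChatGPT also surfaced.

    Assumes the numerator counts ChatGPT themes that match a
    researcher theme, per the definition in the Methods section.
    """
    matched = set(researcher_themes) & set(chatgpt_themes)
    return len(matched) / len(researcher_themes)


# Hypothetical theme labels for illustration only
researcher = [
    "clinic accessibility", "patient awareness", "trust and satisfaction",
    "patient expectations", "willingness to return", "referral explanation",
]
chatgpt = [
    "clinic accessibility", "patient awareness",
    "trust and satisfaction", "patient expectations",
]

print(f"{concordance(researcher, chatgpt):.0%}")  # prints 67%
```

With four of six researcher themes matched, the sketch yields 67%, in the same range as the 66% lower bound reported in the Results.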
Results: The average time taken per transcript was 11.5 min, 11.9 min and 240 min for ChatGPT 3.5, ChatGPT 4.0 and the researcher, respectively. Six themes were identified by the researcher: (i) clinic's accessibility, (ii) patients' awareness, (iii) trust and satisfaction, (iv) patients' expectations, (v) willingness to return and (vi) explanation of the clinic by referral source. Concordance for ChatGPT 3.5 and 4.0 ranged from 66% to 100%.
Conclusion: Preliminary results showed that ChatGPT significantly reduced analysis time, with moderate to good concordance compared with current practice. This highlights ChatGPT's potential to facilitate rapid preliminary analysis. However, regrouping of subthemes will still need to be conducted by a researcher.
Journal overview:
Eye seeks to provide the international practising ophthalmologist with high quality articles, of academic rigour, on the latest global clinical and laboratory based research. Its core aim is to advance the science and practice of ophthalmology with the latest clinical- and scientific-based research. Whilst principally aimed at the practising clinician, the journal contains material of interest to a wider readership including optometrists, orthoptists, other health care professionals and research workers in all aspects of the field of visual science worldwide. Eye is the official journal of The Royal College of Ophthalmologists.
Eye encourages the submission of original articles covering all aspects of ophthalmology including: external eye disease; oculo-plastic surgery; orbital and lacrimal disease; ocular surface and corneal disorders; paediatric ophthalmology and strabismus; glaucoma; medical and surgical retina; neuro-ophthalmology; cataract and refractive surgery; ocular oncology; ophthalmic pathology; ophthalmic genetics.