Pub Date: 2026-03-01 | Epub Date: 2025-10-08 | DOI: 10.1177/10497323251375410
Lorien S Jordan, Paul G Sauberer, Jennifer R Wolgemuth
This paper contributes to ongoing conversations about the ethical and practical integration of generative artificial intelligence (GAI) in qualitative health research by focusing on an often-overlooked aspect of research: dissemination. Given GAI's capacity to translate complex ideas into accessible summaries, simplify jargon, adapt to different comprehension levels, and enhance understanding through analogies, we explore its potential to support knowledge translation. Specifically, we examine the use of GAI podcasts for public-facing dissemination. Drawing on our experience testing three GAI-assisted podcasting platforms, with features ranging from automated scriptwriting to audio production, we assess their affordances and limitations. Our experience with these platforms suggests that the effectiveness of GAI depends less on the tools themselves and more on how researchers critically engage with and shape their use. We conclude by emphasizing the importance of balancing artificial intelligence's promise of speed and reach with concerns about bias, mistrust, and limited artificial intelligence literacy, recognizing GAI as a partner, not a substitute, in meaningful communication.
The Sound of Science: Exploring Generative AI Podcasts for Qualitative Health Research Translation. Qualitative Health Research, pp. 247-261.
Pub Date: 2026-03-01 | Epub Date: 2026-03-04 | DOI: 10.1177/10497323261417532a
Johanna Creswell Báez, James Salvo, Jessica Nina Lester
The articles in this special issue explore the intersections between artificial intelligence (AI) and qualitative health research at a moment of rapid technological expansion and heightened methodological debate. The contributions engage AI not as something to be adopted or rejected but as a focus of critical inquiry that raises epistemological, methodological, and ethical questions for qualitative scholars. Across diverse perspectives, the articles foreground reflexivity, methodological development, and responsible approaches to AI use in clinical settings. The special issue adopts a "big-tent" approach, bringing together varied perspectives that are often in tension, yet productively in conversation. Published amid an accelerating AI hype cycle and increasing institutional pressures to adopt technological solutions, this collection affirms qualitative health research as a vital space for critical dialogue and methodological innovation. The contributions collectively center the interpretive and value-based commitments that have long defined qualitative inquiry, engaging with AI critically and reflexively rather than on its own terms.
Addressing the Special Issue: Intersections (Existing, Emerging, and Imagined) Between Artificial Intelligence and Qualitative Health Research. Qualitative Health Research, 36(2-3), pp. 140-144.
Pub Date: 2026-03-01 | Epub Date: 2025-10-09 | DOI: 10.1177/10497323251367177
Laura Ann Chubb, Suzette Jackson, Badhoora Naseer, Maree Matthews
Research indicates a positive correlation between residential treatment duration and residents' positive outcomes. Between 2015 and 2019, a New Zealand residential drug rehabilitation service noted a rise in premature program exits, leading to an in-depth investigation into the individual and therapeutic community factors that impact residents' completion of the 18-week program. The aim of the study was to understand how to enhance support mechanisms that promote longer treatment stays, with a view to improving well-being outcomes. The authors conducted a two-phase, mixed-methods study. They applied quantitative secondary data analysis to data collected between 2015 and 2019 from 796 participants and conducted follow-up qualitative data collection in 2023, in which 15 former residents participated in focus groups. Six were then randomly selected to participate in an in-depth interview. This article reports findings from the interviews of that study. The aims of this article are threefold. First, the authors introduce data from a New Zealand drug rehabilitation service as a case for using ChatGPT to support AI-assisted thematic narrative analysis; steps in the analysis are detailed through a reproducible prompting process. Second, the authors present findings highlighting factors that influenced residents to leave treatment and those that influenced them to stay. Third, the authors position AI as a complementary tool for qualitative data analysis that enhances methodological rigor and practical applications in addiction research.
To Leave or Stay? Influences on Early Exit and Completion in a New Zealand Residential Drug Rehabilitation Service. Qualitative Health Research, pp. 231-246. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12949039/pdf/
In recent years, artificial intelligence (AI) has gradually permeated the medical sector, bringing about multifaceted changes in healthcare practices. Existing studies demonstrate significant gains from AI in clinical applications in terms of performance and innovation. While this literature largely emphasizes technological advancements, it often overlooks AI's human and professional implications. AI may not replace humans in the near future due to ethical, legal, and technical constraints, but it is already reshaping work practices as well as professional and institutional dynamics in ways that remain underexplored. This paper addresses this gap by focusing on physicians in hospital-based settings, where AI tools are already shaping clinical routines and professional roles. We use a qualitative approach, conducting semi-structured interviews with 19 physicians from diverse specializations in Belgium who use AI for clinical purposes. Analyzing the interviews through the framework of identity work, to explore how physicians make sense of their professional identity and legitimize their work in relation to AI, reveals a persistent tension between compliance and resistance. AI tools, even when they have the potential to serve as substitutes, appear to be primarily used as complementary aids. Physicians often regard them as a second opinion, one they do not hesitate to override, rather than trusting them for decision-making. These findings are key to reassessing physicians' autonomy and agency in relation to AI, elucidating the processes by which physicians constantly negotiate their identity amid growing AI adoption.
AI in Healthcare: Identity Threat or Opportunity? Insights From Medical Specialists. Laurianne Terlinden, Aurélie Verachtert, Jellis Bollens. DOI: 10.1177/10497323251387568. Qualitative Health Research, pp. 289-303.
Pub Date: 2026-03-01 | Epub Date: 2025-12-24 | DOI: 10.1177/10497323251401503
Andrew Prahl
Artificial intelligence (AI) is now routinely deployed in qualitative health research. Comparative evaluations indicate that these systems can reproduce coding methods but falter on culturally nuanced or emotionally complex material. Conventional reflexivity guidelines focus on investigator positionality and provide limited guidance for assessing algorithmic influence at early stages of the analysis process. We introduce the AI-Reflexivity Checklist (ARC), a pre-analysis, evidence-informed checkpoint that sets the appropriate human-in-the-loop (HITL) posture (delegate, assist/augment, or human-led) for LLM-assisted qualitative coding of textual data. Literature from science and technology studies, empirical studies of AI-assisted qualitative analysis, and pragmatic workflow models informed the identification of five decision domains: descriptive scope, contextual variation, experiential depth, ethical exposure, and output reversibility. These domains are operationalized as five sequential prompts completed before AI is introduced. If the planned task is purely descriptive, meanings are stable across contexts, experiential nuance is minimal, ethical risk is low, and outputs can be fully revised or reversed, automation is permitted with routine human verification. Elevated ratings on the experiential or ethical domains point to an assist or human-led posture unless pilot evidence meets pre-specified acceptance criteria; lack of reversibility remains a blocker because it precludes audit and repair. ARC extends existing reflexivity practice to encompass algorithmic actors, offers a brief record suitable for review, and mitigates early path-dependency toward indiscriminate automation.
The AI-Reflexivity Checklist (ARC): A Pre-Analysis Pause for LLM-Assisted Coding. Qualitative Health Research, pp. 181-190.
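The gating logic the ARC abstract describes (reversibility as a hard blocker, an all-clear path to delegation, and pilot evidence unlocking an assist posture) can be sketched as a small decision function. This is an illustrative reconstruction only, not the authors' published instrument: the domain names follow the abstract, but the boolean rating scale, the function and class names, and the exact ordering of checks are assumptions.

```python
from dataclasses import dataclass


@dataclass
class ArcRatings:
    """Hypothetical pre-analysis ratings for the five ARC decision domains.

    The low/elevated boolean scale is an illustrative assumption, not the
    authors' published scoring.
    """
    purely_descriptive: bool      # descriptive scope
    meanings_stable: bool         # contextual variation
    experiential_depth_low: bool  # experiential depth
    ethical_risk_low: bool        # ethical exposure
    outputs_reversible: bool      # output reversibility


def arc_posture(r: ArcRatings, pilot_evidence_meets_criteria: bool = False) -> str:
    """Return a human-in-the-loop posture: 'delegate', 'assist/augment', or 'human-led'."""
    # Lack of reversibility is a blocker: it precludes audit and repair.
    if not r.outputs_reversible:
        return "human-led"
    # All-clear case: automation permitted with routine human verification.
    if (r.purely_descriptive and r.meanings_stable
            and r.experiential_depth_low and r.ethical_risk_low):
        return "delegate"
    # Elevated experiential or ethical ratings point away from delegation
    # unless pilot evidence meets pre-specified acceptance criteria.
    if pilot_evidence_meets_criteria:
        return "assist/augment"
    return "human-led"
```

For example, a purely descriptive, low-risk, fully reversible coding task would yield "delegate", while the same task with elevated experiential depth would fall back to "human-led" unless pilot evidence meets the acceptance criteria.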
Artificial intelligence (AI) is increasingly integrated into care systems, yet little is known about how care service providers perceive and respond to AI in their service provision in the context of supporting culturally and linguistically diverse migrants with disabilities. This study draws on an intersectionality-informed, arts-based research approach to explore how care providers make sense of AI, with attention to how their perceptions are shaped by social identities, professional experiences, and media narratives. A one-act play, constructed from data collected through participatory workshops with 15 care providers, illustrates that participants engage with AI as a relational, emotionally charged, and socially situated phenomenon. Their understanding reflected intersecting experiences of racialization, migration, gender, and labor precarity, as well as exposure to dominant media portrayals of AI. Their narratives showed a mix of fear, ambivalence, and cautious optimism rooted in concern about job security and loss of relational care, alongside hopes that AI might enhance accessibility and reduce human error. The play-based format captured the dialogic, affective, and embodied dimensions of participants' meaning-making, challenging technocratic and disembodied ways of knowing about AI and care. Findings suggest that inclusive and reflective spaces are critical for care providers to engage meaningfully with AI technologies and that intersectionality must inform the design, governance, and implementation of AI in care settings.
Relational Meanings of AI in Disability Care: An Intersectional, Arts-Based Inquiry. Karen Soldatic, Rohini Balram, Mikyung Lee, Tommaso Santilli, Liam Magee. DOI: 10.1177/10497323251401541. Qualitative Health Research, pp. 304-316. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12949034/pdf/
Pub Date: 2026-03-01 | Epub Date: 2025-09-08 | DOI: 10.1177/10497323251365211
Magnhild Vikan, Ramtin Aryan, Mari Serine Kannelønning, Michael Alexander Riegler, Stein Ove Danielsen
The launch of ChatGPT in November 2022 accelerated discussions and research into whether base large language models (LLMs) could increase the efficiency of qualitative analysis phases or even replace qualitative researchers. Reflexive thematic analysis (RTA) is a commonly used method for qualitative text analysis that emphasizes the researcher's subjectivity and reflexivity to enable a situated, in-depth understanding of knowledge generation. Researchers appear optimistic about the potential of LLMs in qualitative research; however, questions remain about whether base models can meaningfully contribute to the interpretation and abstraction of a dataset. The primary objective of this study was to explore how LLMs may support an RTA of an interview text from health science research. Secondary objectives included identifying recommended prompt strategies for similar studies, highlighting potential weaknesses or challenges, and fostering engagement among qualitative researchers regarding these threats and possibilities. We provided the interview file to an offline LLM and conducted a series of tests aligned with the phases of RTA. Insights from each test guided refinements to the next and contributed to the development of a recommended prompt strategy. At this stage, base LLMs provide limited support and do not increase the efficiency of RTA. At best, LLMs may identify gaps in the researchers' perspectives. Realizing the potential of LLMs to inspire broader discussion and deeper reflection requires a well-defined strategy and the avoidance of misleading prompts, self-referential responses, misleading translations, and errors. In conclusion, high-quality RTA requires a human analyst, a comprehensive familiarization phase, and methodological competence to preserve epistemological integrity.
Reflecting on LLM Support in Reflexive Thematic Analysis: An Exploratory Study. Qualitative Health Research, pp. 191-205. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12949038/pdf/
Pub Date: 2026-02-28 | DOI: 10.1177/10497323261417232
Paul Sharp, Nina Gao, Matthew Sha, Trevor Goodyear, John L Oliffe
Online recruitment and data collection in qualitative research grew significantly during the COVID-19 pandemic, revealing a host of benefits, including cost and time savings for researchers and participants. However, significant risks and limitations exist when recruiting and interviewing participants online. "Imposter participants" have emerged, seemingly incentivized by study honoraria. These imposter participants impose significant administrative burdens and call into question data integrity and researcher commitment to equitable and inclusive sampling. This article features insights drawn from experiences of conducting online recruitment for a Canadian photovoice study of men's mental health and peer support, organized into three themes: (1) Gone Phishing: Detecting and Deterring Imposters, (2) Screening for Subterfuge: Balancing Integrity and Inclusivity, and (3) Fraud Fatigue: Researcher Strain and Drain. The first theme, Gone Phishing: Detecting and Deterring Imposters, outlines processes for identifying imposter participants, including technological tools and human strategies. Screening for Subterfuge: Balancing Integrity and Inclusivity chronicles ethical implications and researcher adaptations for ensuring that authentic, eligible participants are not inadvertently excluded. The third theme, Fraud Fatigue: Researcher Strain and Drain, details the workload and distress that researchers can face in dealing with imposter participants, while thoughtfully considering avenues for reducing these potential harms. Findings across these themes underscore the potential for imposter participants to increase project costs and compromise data integrity in online qualitative research. To address the need for strategies, recommendations are made for supporting researchers and upgrading university systems with improved security and risk management guidelines for managing imposter participants, especially in the wake of artificial intelligence-generated scams.
Data or Deception: Imposter Participants in Online Qualitative Research. Qualitative Health Research, published online ahead of print.
Pub Date: 2026-02-28. DOI: 10.1177/10497323261426176
Michaela Ann Sparringa, Lenora Duhn, Pilar Camargo-Plazas
A key standard of the Canadian health care system is reasonable access to health services for all. Yet this remains unfulfilled for many women facing financial hardship. Women living on a low income are more likely to experience anxiety, depression, and harmful health behaviors. While researchers have explored access among equity-deserving women, few have used a narrative approach, and none have applied dialogic or performative analysis (a method that examines the interactive nature of storytelling and how narratives function as actions that shape identity and social reality) in a qualitative secondary data analysis about women living in Canada. Interview and focus group transcripts from a primary study involving five women were revisited to address a new question: What stories do women living on a low income have about accessing health care services in Kingston, Canada? Participants' accounts were framed as theatrical scenes, portraying the structural and emotional dynamics shaping their health care experiences. Core scenes emerged: rejection and exclusion, when access is denied or limited; health care information, when directions fail; and the need for reassurance and trust in relationships with health care providers. Limitations in social determinants of health (e.g., housing, food access, and transportation) were a through line regarding access; despite these barriers, participants persisted and adapted. Their stories evidence the pressing need for re-designed systems prioritizing equity, compassion, and clear communication. This study shows the realities of those often overlooked in policy discussions and demonstrates the depth of a narrative approach for revealing how care is lived.
{"title":"\"It Shouldn't Be This Hard\": Women's Experiences Accessing Health Care While Living With a Low Income.","authors":"Michaela Ann Sparringa, Lenora Duhn, Pilar Camargo-Plazas","doi":"10.1177/10497323261426176","DOIUrl":"https://doi.org/10.1177/10497323261426176","url":null,"abstract":"<p><p>A key standard of the Canadian health care system is reasonable access to health services for all-yet, this remains unfulfilled for many women facing financial hardships. Women living on a low income are more likely to experience anxiety, depression, and harmful health behaviors. While researchers have explored access among equity-deserving women, few have used a narrative approach and none have applied dialogic or performative analysis-<i>a method that examines the interactive nature of storytelling and how narratives function as actions that shape identity and social reality</i>-in a qualitative secondary data analysis about women living in Canada. Interview/focus group transcripts from a primary study about five women were revisited to address the new question: What stories do women living on a low income have about accessing health care services in Kingston, Canada? Participants' accounts were framed as theatrical scenes, portraying structural and emotional dynamics shaping their health care experiences. Core scenes emerged: <i>rejection and exclusion-when access is denied or limited</i>; <i>health care information-when directions fail</i>; and <i>the need for reassurance and trust in relationships with health care providers</i>. Limitations in social determinants (e.g., housing, food access, and transportation) were a through line regarding access, and despite which, participants persisted and adapted. Their stories evidence the pressing need for re-designed systems prioritizing equity, compassion, and clear communication. 
This study shows the realities of those often overlooked in policy discussions and demonstrates the depth of a narrative approach for revealing how care is lived.</p>","PeriodicalId":48437,"journal":{"name":"Qualitative Health Research","volume":" ","pages":"10497323261426176"},"PeriodicalIF":2.4,"publicationDate":"2026-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147322021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-02-24. DOI: 10.1177/10497323261421127
Lili R Romann, Elizabeth A Hintz, Jacqueline N Gunning, Shardé M Davis, Sarah N Boateng
Women of color with autoimmune disease experience communicative dilemmas at the intersection of their triply minoritized (i.e., ethnic-racial, gender, and illness) identities during interactions with their healthcare providers (HCPs), which shape their care. Sensitized by intersectionality and normative rhetorical theory (NRT), the present study interrogates taken-for-granted assumptions reflected in HCPs' communication, as recalled by 150 Black and African, Hispanic and Latina, Native American and Alaska Native, and Multiracial women of color with autoimmune disease. Using critical thematic analysis, we identify experiences of dismissal of symptoms related to autoimmune disease, illustrated through in vivo themes including (a) "another crazy woman," (b) "assumed I was drug-seeking," (c) "blamed it on my weight," and (d) autoimmunity as elusive. We also identify conflicting conversational purposes, including (a) interdependent task and relational purposes, (b) the overriding salience of identity purposes, and (c) interactions with healthcare providers who shared identities with patients. We extend NRT by asserting that purposes can vary in magnitude and relevance in a given context, and we offer practical implications for HCPs.
{"title":"Navigating Healthcare as a Woman of Color With Autoimmune Disease: Intersectional Dilemmas in Patient-Provider Interactions.","authors":"Lili R Romann, Elizabeth A Hintz, Jacqueline N Gunning, Shardé M Davis, Sarah N Boateng","doi":"10.1177/10497323261421127","DOIUrl":"https://doi.org/10.1177/10497323261421127","url":null,"abstract":"<p><p>Women of color with autoimmune disease experience communicative dilemmas at the intersection of their triply minoritized (i.e., ethnic-racial, gender, and illness) identities during interactions with their healthcare providers (HCPs), which shape their care. Sensitized by intersectionality and normative rhetorical theory (NRT), the present study interrogates taken-for-granted assumptions reflected in HCPs' communication, as recalled by 150 Black and African, Hispanic and Latina, Native American and Alaska Native, and Multiracial women of color with autoimmune disease. Using critical thematic analysis, we identify experiences of dismissal of symptoms related to autoimmune disease, illustrated through <i>in vivo</i> themes including (a) \"another crazy woman,\" (b) \"assumed I was drug-seeking,\" (c) \"blamed it on my weight,\" and (d) autoimmunity as elusive. We also identified conflicting conversational purposes, including (a) interdependent task and relational purposes, (b) the overriding salience of identity purposes, and (c) interactions with healthcare providers who shared identities with patients. 
We extend NRT by asserting that purposes can vary in magnitude and relevance pertaining to a given context and offer practical implications for HCPs.</p>","PeriodicalId":48437,"journal":{"name":"Qualitative Health Research","volume":" ","pages":"10497323261421127"},"PeriodicalIF":2.4,"publicationDate":"2026-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147285902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}