Study objective: Emergency department (ED) patient experience surveys, such as the Emergency Department Consumer Assessment of Healthcare Providers and Systems (ED CAHPS), NRCHealth, and Press Ganey, often include 10 or more items designed to measure distinct aspects of patient-centered care. However, it is unclear whether responses reflect unique constructs or primarily represent patients' overall experience. Our objective was to determine the extent to which ED patient experience survey items capture distinct constructs and to examine the association between these constructs and clinical and operational factors.
Methods: We conducted a cross-sectional study of NRCHealth ED patient experience surveys collected from 13 EDs within a large regional health system between January 2022 and December 2023. Survey responses were merged with electronic health record data, including patient demographics, wait times, hallway bed placement, initial and change in pain scores, and ED crowding. Exploratory factor analysis using tetrachoric correlations was performed to assess the dimensionality of very positive ("top-box") survey responses. Logistic regression was used to estimate associations between individual survey items and clinical and operational predictors.
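The dimensionality assessment described above can be sketched in a few lines of Python. This is an illustrative sketch, not the study's analysis: the data are simulated, Pearson correlations on the binary items stand in for the tetrachoric correlations the study used, and the one-factor extraction uses the leading eigenpair of the item correlation matrix.

```python
import numpy as np

# Illustrative data: one latent "overall experience" factor drives all items.
rng = np.random.default_rng(0)
n_patients, n_items = 5000, 6
latent = rng.normal(size=n_patients)
scores = latent[:, None] + 0.6 * rng.normal(size=(n_patients, n_items))
top_box = (scores > 0).astype(float)  # 1 = very positive ("top-box") response

# Pearson correlations as a stand-in for tetrachoric correlations (assumption).
R = np.corrcoef(top_box, rowvar=False)

# One-factor extraction from the leading eigenpair of the correlation matrix.
eigvals, eigvecs = np.linalg.eigh(R)              # eigenvalues in ascending order
loadings = eigvecs[:, -1] * np.sqrt(eigvals[-1])  # loadings on the first factor
loadings *= np.sign(loadings.sum())               # eigenvector sign is arbitrary

print(np.round(loadings, 2))  # uniformly high loadings -> one dominant factor
```

When every item shares a single latent cause, as simulated here, all loadings come out high and similar, which is the pattern the study reports (0.83 to 0.96 on one factor).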
Results: Among 58,523 respondents, factor analysis demonstrated that survey items loaded strongly (0.83 to 0.96) on a single underlying factor. Logistic regression showed that individual items had similar associations with operational factors, particularly hallway bed placement, wait times, and ED crowding, despite measuring conceptually distinct aspects of care.
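The per-item associations reported above come from logistic regression of each top-box item on operational predictors. A minimal numpy-only sketch of one such regression, fit by Newton-Raphson, is below; the predictors, coefficients, and data are simulated assumptions for illustration, not the study's estimates.

```python
import numpy as np

# Simulated predictors; effect sizes are assumptions for illustration only.
rng = np.random.default_rng(1)
n = 20000
hallway_bed = rng.binomial(1, 0.2, size=n)    # placed in a hallway bed (0/1)
wait_hours = rng.exponential(1.0, size=n)     # waiting time in hours
true_beta = np.array([1.0, -0.8, -0.5])       # intercept, hallway, wait (assumed)
X = np.column_stack([np.ones(n), hallway_bed, wait_hours])
p = 1.0 / (1.0 + np.exp(-(X @ true_beta)))
top_box = rng.binomial(1, p)                  # 1 = top-box response on one item

# Fit logistic regression by Newton-Raphson (iteratively reweighted least squares).
beta = np.zeros(3)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-(X @ beta)))    # fitted probabilities
    grad = X.T @ (top_box - mu)               # score vector
    hess = X.T @ (X * (mu * (1.0 - mu))[:, None])  # observed information
    beta += np.linalg.solve(hess, grad)

odds_ratios = np.exp(beta[1:])  # odds of a top-box response per predictor
```

Repeating this fit for each survey item and comparing the coefficients is one way to see the pattern the study describes: conceptually distinct items yielding similar associations with the same operational factors.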
Conclusion: ED patient experience survey items may reflect overall experience rather than distinct constructs. Shorter surveys or alternative formats, such as incorporating free-text responses with natural language processing, may improve the efficiency and interpretability of patient experience measurement.
Study objective: To compare the use of ambient artificial intelligence (AI) versus human scribes in the emergency department in terms of note quality and time spent in the electronic health record.
Methods: A quality improvement pilot was performed with 5 early adopters from December 2024 to January 2025. Physicians were assigned to a human or AI scribe. Two physicians, blinded to the chart's origin, scored notes using the Physician Documentation Quality Instrument (PDQI-9). We accessed our electronic health record for time metrics and note contributions and compared PDQI-9 scores, time metrics, and note contribution between groups.
Results: There were 710 visits, 284 with human scribes (123 adult and 161 pediatric) and 426 with AI-assisted charting (271 adult, 155 pediatric). PDQI-9 scores were similar for adults, but AI scribes scored lower for pediatric patients (41.36 versus 42.25, adjusted risk ratio [aRR] = -1.89 [95% confidence interval (CI) -3.58 to -0.20]). More time was spent in the electronic health record notes section per patient when using AI scribes (adult: 4.3 versus 1.8 minutes, aRR = 2.38 [95% CI 1.85 to 3.05]; pediatric: 3.5 versus 1.6 minutes, aRR = 2.21 [95% CI 1.94 to 2.51]). Note length was similar, but physicians contributed significantly more characters per note when using AI (adult: 60.1% versus 30.8%, adjusted mean difference = 32.9 [95% CI 20.8 to 45.0]; pediatric: 62.3% versus 27.1%, adjusted mean difference = 35.2 [95% CI 29.7 to 40.7]).
Conclusion: Compared with human scribes, AI scribes were associated with more time spent in the electronic health record notes section, more physician note contribution, and similar or lower note quality.

