Technology-Supported Self-Triage Decision Making: A Mixed-Methods Study

Marvin Kopka, Sonja Mei Wang, Samira Kunz, Christine Schmid, Markus A. Feufel

medRxiv - Health Informatics, published 2024-09-13. DOI: 10.1101/2024.09.12.24313558
Abstract
Symptom-assessment applications (SAAs) and large language models (LLMs) are increasingly used by laypeople to navigate care options. Although humans ultimately make the final decision when using these systems, previous research has typically examined the performance of humans and SAAs/LLMs separately. Thus, it is unclear how decision-making unfolds in such hybrid human-technology teams and whether SAAs/LLMs can improve laypeople's decisions. To address this gap, we conducted a convergent parallel mixed-methods study combining semi-structured interviews with a randomized controlled trial. Our interview data revealed that in human-technology teams, decision-making is influenced by factors before, during, and after the interaction. Users tend to rely on technology for information gathering and analysis but remain responsible for information integration and the final decision. Based on these results, we developed a model of technology-assisted self-triage decision-making. Our quantitative results indicate that when using a high-performing SAA, laypeople's decision accuracy improved from 53.2% to 64.5% (OR = 2.52, p < .001). In contrast, decision accuracy remained unchanged when using an LLM (54.8% before vs. 54.2% after usage, p = .79). These findings highlight the importance of studying SAAs/LLMs with humans in the loop, as opposed to analyzing them in isolation.