Animesh Tandon, Bryan Cobb, Jacob Centra, Elena Izmailova, Nikolay V Manyakov, Samantha McClenahan, Smit Patel, Emre Sezgin, Srinivasan Vairavan, Bernard Vrijens, Jessie P Bakker
{"title":"基于传感器的数字健康技术的人为因素、以人为本的设计和可用性:范围审查。","authors":"Animesh Tandon, Bryan Cobb, Jacob Centra, Elena Izmailova, Nikolay V Manyakov, Samantha McClenahan, Smit Patel, Emre Sezgin, Srinivasan Vairavan, Bernard Vrijens, Jessie P Bakker","doi":"10.2196/57628","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Increasing adoption of sensor-based digital health technologies (sDHTs) in recent years has cast light on the many challenges in implementing these tools into clinical trials and patient care at scale across diverse patient populations; however, the methodological approaches taken toward sDHT usability evaluation have varied markedly.</p><p><strong>Objective: </strong>This review aims to explore the current landscape of studies reporting data related to sDHT human factors, human-centered design, and usability, to inform our concurrent work on developing an evaluation framework for sDHT usability.</p><p><strong>Methods: </strong>We conducted a scoping review of studies published between 2013 and 2023 and indexed in PubMed, in which data related to sDHT human factors, human-centered design, and usability were reported. Following a systematic screening process, we extracted the study design, participant sample, the sDHT or sDHTs used, the methods of data capture, and the types of usability-related data captured.</p><p><strong>Results: </strong>Our literature search returned 442 papers, of which 85 papers were found to be eligible and 83 papers were available for data extraction and not under embargo. In total, 164 sDHTs were evaluated; 141 (86%) sDHTs were wearable tools while the remaining 23 (14%) sDHTs were ambient tools. The majority of studies (55/83, 66%) reported summative evaluations of final-design sDHTs. Almost all studies (82/83, 99%) captured data from targeted end users, but only 18 (22%) out of 83 studies captured data from additional users such as care partners or clinicians. User satisfaction and ease of use were evaluated for 83% (136/164) and 91% (150/164) of sDHTs, respectively; however, learnability, efficiency, and memorability were reported for only 11 (7%), 4 (2%), and 2 (1%) out of 164 sDHTs, respectively. A total of 14 (9%) out of 164 sDHTs were evaluated according to the extent to which users were able to understand the clinical data or other information presented to them (understandability) or the actions or tasks they should complete in response (actionability). 
Notable gaps in reporting included the absence of a sample size rationale (reported for 21/83, 25% of all studies and 17/55, 31% of summative studies) and incomplete sociodemographic descriptive data (complete age, sex/gender, and race/ethnicity reported for 14/83, 17% of studies).</p><p><strong>Conclusions: </strong>Based on our findings, we suggest four actionable recommendations for future studies that will help to advance the implementation of sDHTs: (1) consider an in-depth assessment of technology usability beyond user satisfaction and ease of use, (2) expand recruitment to include important user groups such as clinicians and care partners, (3) report the rationale for key study design considerations including the sample size, and (4) provide rich descriptive statistics regarding the study sample to allow a complete understanding of generalizability to other patient populations and contexts of use.</p>","PeriodicalId":16337,"journal":{"name":"Journal of Medical Internet Research","volume":"26 ","pages":"e57628"},"PeriodicalIF":5.8000,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Human Factors, Human-Centered Design, and Usability of Sensor-Based Digital Health Technologies: Scoping Review.\",\"authors\":\"Animesh Tandon, Bryan Cobb, Jacob Centra, Elena Izmailova, Nikolay V Manyakov, Samantha McClenahan, Smit Patel, Emre Sezgin, Srinivasan Vairavan, Bernard Vrijens, Jessie P Bakker\",\"doi\":\"10.2196/57628\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Increasing adoption of sensor-based digital health technologies (sDHTs) in recent years has cast light on the many challenges in implementing these tools into clinical trials and patient care at scale across diverse patient populations; however, the methodological approaches taken toward sDHT usability evaluation have varied markedly.</p><p><strong>Objective: </strong>This review aims to explore the current landscape of studies reporting data related to sDHT human factors, human-centered design, and usability, to inform our concurrent work on developing an evaluation framework for sDHT usability.</p><p><strong>Methods: </strong>We conducted a scoping review of studies published between 2013 and 2023 and indexed in PubMed, in which data related to sDHT human factors, human-centered design, and usability were reported. Following a systematic screening process, we extracted the study design, participant sample, the sDHT or sDHTs used, the methods of data capture, and the types of usability-related data captured.</p><p><strong>Results: </strong>Our literature search returned 442 papers, of which 85 papers were found to be eligible and 83 papers were available for data extraction and not under embargo. In total, 164 sDHTs were evaluated; 141 (86%) sDHTs were wearable tools while the remaining 23 (14%) sDHTs were ambient tools. The majority of studies (55/83, 66%) reported summative evaluations of final-design sDHTs. Almost all studies (82/83, 99%) captured data from targeted end users, but only 18 (22%) out of 83 studies captured data from additional users such as care partners or clinicians. User satisfaction and ease of use were evaluated for 83% (136/164) and 91% (150/164) of sDHTs, respectively; however, learnability, efficiency, and memorability were reported for only 11 (7%), 4 (2%), and 2 (1%) out of 164 sDHTs, respectively. 
A total of 14 (9%) out of 164 sDHTs were evaluated according to the extent to which users were able to understand the clinical data or other information presented to them (understandability) or the actions or tasks they should complete in response (actionability). Notable gaps in reporting included the absence of a sample size rationale (reported for 21/83, 25% of all studies and 17/55, 31% of summative studies) and incomplete sociodemographic descriptive data (complete age, sex/gender, and race/ethnicity reported for 14/83, 17% of studies).</p><p><strong>Conclusions: </strong>Based on our findings, we suggest four actionable recommendations for future studies that will help to advance the implementation of sDHTs: (1) consider an in-depth assessment of technology usability beyond user satisfaction and ease of use, (2) expand recruitment to include important user groups such as clinicians and care partners, (3) report the rationale for key study design considerations including the sample size, and (4) provide rich descriptive statistics regarding the study sample to allow a complete understanding of generalizability to other patient populations and contexts of use.</p>\",\"PeriodicalId\":16337,\"journal\":{\"name\":\"Journal of Medical Internet Research\",\"volume\":\"26 \",\"pages\":\"e57628\"},\"PeriodicalIF\":5.8000,\"publicationDate\":\"2024-11-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Medical Internet Research\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.2196/57628\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"HEALTH CARE SCIENCES & SERVICES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Medical Internet Research","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.2196/57628","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Human Factors, Human-Centered Design, and Usability of Sensor-Based Digital Health Technologies: Scoping Review.
Background: Increasing adoption of sensor-based digital health technologies (sDHTs) in recent years has cast light on the many challenges in implementing these tools into clinical trials and patient care at scale across diverse patient populations; however, the methodological approaches taken toward sDHT usability evaluation have varied markedly.
Objective: This review aims to explore the current landscape of studies reporting data related to sDHT human factors, human-centered design, and usability, to inform our concurrent work on developing an evaluation framework for sDHT usability.
Methods: We conducted a scoping review of studies published between 2013 and 2023 and indexed in PubMed, in which data related to sDHT human factors, human-centered design, and usability were reported. Following a systematic screening process, we extracted the study design, participant sample, the sDHT or sDHTs used, the methods of data capture, and the types of usability-related data captured.
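As an illustration only (not the authors' actual tooling), the extraction fields listed in the Methods could be captured in a simple structured record per study; the schema below is a hypothetical sketch whose field names are chosen solely to mirror the categories described above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExtractionRecord:
    """Hypothetical per-study extraction record mirroring the fields named in the Methods."""
    study_id: str
    study_design: str                                   # e.g., "summative evaluation of a final-design sDHT"
    participant_sample: str                             # e.g., "targeted end users; care partners"
    sdhts_used: List[str] = field(default_factory=list)            # wearable or ambient tools evaluated
    data_capture_methods: List[str] = field(default_factory=list)  # e.g., "questionnaire", "interview"
    usability_data_types: List[str] = field(default_factory=list)  # e.g., "satisfaction", "ease of use"
```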
Results: Our literature search returned 442 papers, of which 85 papers were found to be eligible and 83 papers were available for data extraction and not under embargo. In total, 164 sDHTs were evaluated; 141 (86%) sDHTs were wearable tools while the remaining 23 (14%) sDHTs were ambient tools. The majority of studies (55/83, 66%) reported summative evaluations of final-design sDHTs. Almost all studies (82/83, 99%) captured data from targeted end users, but only 18 (22%) out of 83 studies captured data from additional users such as care partners or clinicians. User satisfaction and ease of use were evaluated for 83% (136/164) and 91% (150/164) of sDHTs, respectively; however, learnability, efficiency, and memorability were reported for only 11 (7%), 4 (2%), and 2 (1%) out of 164 sDHTs, respectively. A total of 14 (9%) out of 164 sDHTs were evaluated according to the extent to which users were able to understand the clinical data or other information presented to them (understandability) or the actions or tasks they should complete in response (actionability). Notable gaps in reporting included the absence of a sample size rationale (reported for 21/83, 25% of all studies and 17/55, 31% of summative studies) and incomplete sociodemographic descriptive data (complete age, sex/gender, and race/ethnicity reported for 14/83, 17% of studies).
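For clarity, the percentages in the Results are simple proportions of the 164 evaluated sDHTs (or of the 83 included studies). A minimal sketch of that arithmetic is shown below; the counts are taken from the abstract and the helper function is hypothetical.

```python
# Illustrative re-computation of proportions reported in the Results.
def percent(count: int, total: int) -> str:
    """Format count/total as 'count/total (NN%)', rounded to the nearest whole percent."""
    return f"{count}/{total} ({round(100 * count / total)}%)"

TOTAL_SDHTS = 164
TOTAL_STUDIES = 83

print("Wearable sDHTs:        ", percent(141, TOTAL_SDHTS))   # 141/164 (86%)
print("Ambient sDHTs:         ", percent(23, TOTAL_SDHTS))    # 23/164 (14%)
print("Ease of use assessed:  ", percent(150, TOTAL_SDHTS))   # 150/164 (91%)
print("Sample size rationale: ", percent(21, TOTAL_STUDIES))  # 21/83 (25%)
```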
Conclusions: Based on our findings, we suggest four actionable recommendations for future studies that will help to advance the implementation of sDHTs: (1) consider an in-depth assessment of technology usability beyond user satisfaction and ease of use, (2) expand recruitment to include important user groups such as clinicians and care partners, (3) report the rationale for key study design considerations including the sample size, and (4) provide rich descriptive statistics regarding the study sample to allow a complete understanding of generalizability to other patient populations and contexts of use.
Journal introduction:
The Journal of Medical Internet Research (JMIR) is a highly respected publication in the field of health informatics and health services. Founded in 1999, JMIR has been a pioneer in the field for over two decades.
As a leader in the industry, the journal focuses on digital health, data science, health informatics, and emerging technologies for health, medicine, and biomedical research. It is recognized as a top publication in these disciplines, ranking in the first quartile (Q1) by Impact Factor.
Notably, JMIR holds the prestigious position of being ranked #1 on Google Scholar within the "Medical Informatics" discipline.