Comparison of Lab- and Remote-Based Human Factors Validation – A Pilot Study

Karoline Johnsen, Bernhard Wandtner, M. Thorwarth
{"title":"实验室和远程人为因素验证的比较-一项试点研究","authors":"Karoline Johnsen, Bernhard Wandtner, M. Thorwarth","doi":"10.54941/ahfe1002128","DOIUrl":null,"url":null,"abstract":"The possibility of conducting human factors validations remotely becomes increasingly important, not only due to the COVID-19 pandemic. However, there is a lack of research addressing the reliability of remotely obtained data in the field of medical products. Observability seems to be a key factor and has therefore be ensured in remote setups. This research focuses on producing and analyzing first data to compare lab-based and remote-based setups. The goal is to evaluate if and under which circumstances human factors validations of medical devices could be conducted remotely and which methodological aspects must be considered. In a simulated human factors validation / usability test, two lab-based and two remote-based conditions were investigated. The lab-based observer was present in the test room during the evaluation. Afterwards, the session’s recording could be reviewed as a second variant of the lab-based observation. The remote-based observer had the recording as a resource for observation only and the chance to review it afterwards as a second condition. The observations were based on a simulated human factors validation for two different medical products (device and software). The main basis for data analysis was an observation protocol in which the individual actions to be performed were categorized by the two observer groups according to classification derived from FDA’s Human Factors Guidance. Five human factors professionals in the lab-based and the remote-based setup respectively, with prior knowledge about both products in focus of the evaluation, generated the protocol data. The datasets from the lab-based and the remote-based observations were compared regarding their level of agreement. 
In addition, the quality of observations was assessed by comparing them to a sample solution, including the effect of the setups on the observers’ cognitive workload. Descriptively assessed, any-two agreement and Cohen´s κ calculations showed differences in observations of the lab-based vs. remote-based setup that became smaller when potentially critical actions were in focus. For the medical software less than 10% of the observations differed compared to around 15% for the medical device considering only critical use errors. The quality of observations was slightly higher when the observer was on-site, and better overall for the medical device compared to medical software regarding percentual agreement with the sample solution. Interestingly, a particularly high cognitive workload occurred when the medical device was observed remotely comparing the total NASA-TLX scores between the setups. Findings do not seem to strongly favor either lab-based or remote-based setups. For the medical device, the lab-based observation seemed to be more appropriate while for the medical software the result is not clear. However, remote observation performed better for the medical software than for the medical device. Observing the evaluation remotely and verifying the results with the help of video recordings detected the highest number of critical use errors. Overall, initial results from the feasibility study highlight the potential of remote evaluations. However, more research is needed to validate the results with a larger sample size and determine the influencing factors that might favor remote vs. 
lab-based approaches.","PeriodicalId":389399,"journal":{"name":"Healthcare and Medical Devices","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Comparison of Lab- and Remote-Based Human Factors Validation – A Pilot Study\",\"authors\":\"Karoline Johnsen, Bernhard Wandtner, M. Thorwarth\",\"doi\":\"10.54941/ahfe1002128\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The possibility of conducting human factors validations remotely becomes increasingly important, not only due to the COVID-19 pandemic. However, there is a lack of research addressing the reliability of remotely obtained data in the field of medical products. Observability seems to be a key factor and has therefore be ensured in remote setups. This research focuses on producing and analyzing first data to compare lab-based and remote-based setups. The goal is to evaluate if and under which circumstances human factors validations of medical devices could be conducted remotely and which methodological aspects must be considered. In a simulated human factors validation / usability test, two lab-based and two remote-based conditions were investigated. The lab-based observer was present in the test room during the evaluation. Afterwards, the session’s recording could be reviewed as a second variant of the lab-based observation. The remote-based observer had the recording as a resource for observation only and the chance to review it afterwards as a second condition. The observations were based on a simulated human factors validation for two different medical products (device and software). The main basis for data analysis was an observation protocol in which the individual actions to be performed were categorized by the two observer groups according to classification derived from FDA’s Human Factors Guidance. 
Five human factors professionals in the lab-based and the remote-based setup respectively, with prior knowledge about both products in focus of the evaluation, generated the protocol data. The datasets from the lab-based and the remote-based observations were compared regarding their level of agreement. In addition, the quality of observations was assessed by comparing them to a sample solution, including the effect of the setups on the observers’ cognitive workload. Descriptively assessed, any-two agreement and Cohen´s κ calculations showed differences in observations of the lab-based vs. remote-based setup that became smaller when potentially critical actions were in focus. For the medical software less than 10% of the observations differed compared to around 15% for the medical device considering only critical use errors. The quality of observations was slightly higher when the observer was on-site, and better overall for the medical device compared to medical software regarding percentual agreement with the sample solution. Interestingly, a particularly high cognitive workload occurred when the medical device was observed remotely comparing the total NASA-TLX scores between the setups. Findings do not seem to strongly favor either lab-based or remote-based setups. For the medical device, the lab-based observation seemed to be more appropriate while for the medical software the result is not clear. However, remote observation performed better for the medical software than for the medical device. Observing the evaluation remotely and verifying the results with the help of video recordings detected the highest number of critical use errors. Overall, initial results from the feasibility study highlight the potential of remote evaluations. However, more research is needed to validate the results with a larger sample size and determine the influencing factors that might favor remote vs. 
lab-based approaches.\",\"PeriodicalId\":389399,\"journal\":{\"name\":\"Healthcare and Medical Devices\",\"volume\":\"30 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Healthcare and Medical Devices\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.54941/ahfe1002128\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Healthcare and Medical Devices","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.54941/ahfe1002128","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

The possibility of conducting human factors validations remotely is becoming increasingly important, and not only because of the COVID-19 pandemic. However, there is a lack of research addressing the reliability of remotely obtained data in the field of medical products. Observability appears to be a key factor and must therefore be ensured in remote setups. This research focuses on producing and analyzing initial data to compare lab-based and remote-based setups. The goal is to evaluate whether, and under which circumstances, human factors validations of medical devices could be conducted remotely, and which methodological aspects must be considered. In a simulated human factors validation / usability test, two lab-based and two remote-based conditions were investigated. The lab-based observer was present in the test room during the evaluation; afterwards, the session’s recording could be reviewed as a second variant of the lab-based observation. The remote-based observer had the recording as the only resource for observation, with the chance to review it afterwards as a second condition. The observations were based on a simulated human factors validation for two different medical products (a device and a software application). The main basis for data analysis was an observation protocol in which the individual actions to be performed were categorized by the two observer groups according to a classification derived from the FDA’s Human Factors Guidance. Five human factors professionals in the lab-based and in the remote-based setup, respectively, each with prior knowledge of both products under evaluation, generated the protocol data. The datasets from the lab-based and the remote-based observations were compared regarding their level of agreement. In addition, the quality of observations was assessed by comparing them to a sample solution, including the effect of the setups on the observers’ cognitive workload.
Descriptively assessed, any-two agreement and Cohen’s κ calculations showed differences between the lab-based and the remote-based observations that became smaller when potentially critical actions were in focus. Considering only critical use errors, less than 10% of the observations differed for the medical software, compared to around 15% for the medical device. The quality of observations was slightly higher when the observer was on-site and, in terms of percentage agreement with the sample solution, better overall for the medical device than for the medical software. Interestingly, comparing total NASA-TLX scores between the setups, a particularly high cognitive workload occurred when the medical device was observed remotely. The findings do not seem to strongly favor either lab-based or remote-based setups. For the medical device, lab-based observation seemed more appropriate, while for the medical software the result is not clear; remote observation performed better for the medical software than for the medical device. Observing the evaluation remotely and verifying the results with the help of video recordings detected the highest number of critical use errors. Overall, the initial results of this feasibility study highlight the potential of remote evaluations. However, more research is needed to validate the results with a larger sample size and to determine the influencing factors that might favor remote versus lab-based approaches.
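The agreement analysis described in the abstract rests on two standard measures: raw (any-two) percentage agreement and Cohen’s κ, which corrects that raw agreement for chance. A minimal sketch of both, for intuition only (this is not the authors’ analysis code, and the category labels and observer data below are hypothetical):

```python
# Illustrative sketch only -- not the authors' analysis code. Shows the two
# agreement measures named in the abstract: raw percentage agreement (p_o)
# and Cohen's kappa, which corrects p_o for chance agreement.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """kappa = (p_o - p_e) / (1 - p_e) for two raters' categorical labels."""
    n = len(rater_a)
    # p_o: observed (raw) agreement -- fraction of items labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # p_e: agreement expected by chance from each rater's label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n)
              for c in set(rater_a) | set(rater_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical per-action classifications (labels loosely modeled on
# FDA-style use-error categories; the study's actual protocol differs).
lab_obs    = ["success", "success", "difficulty", "critical", "success"]
remote_obs = ["success", "difficulty", "difficulty", "critical", "success"]
print(cohens_kappa(lab_obs, remote_obs))
```

κ near 1 indicates agreement well beyond chance, while κ near 0 means the two observers agree no more often than their label frequencies would predict, which is why the abstract reports κ alongside raw agreement percentages.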