Questionnaires for evaluating virtual reality: A systematic scoping review

Lina Bareišytė, Syl Slatman, Judith Austin, Martin Rosema, Iris van Sintemaartensdijk, Steven Watson, Christina Bode

Computers in Human Behavior Reports, Volume 16, Article 100505 (published 2024-10-16). DOI: 10.1016/j.chbr.2024.100505
Citations: 0
Abstract
Introduction
Virtual reality (VR) is an emerging technology in fields including education and healthcare. A challenge for VR researchers is knowing which VR evaluation instruments exist and which best align with their research objectives. Therefore, a systematic scoping review was conducted to identify and appraise questionnaires that evaluate VR.
Methods
A scoping review across five scientific databases identified articles that described the development of questionnaires that evaluated VR. All identified articles were screened and data about the measured constructs, (psychometric) properties, and availability were extracted.
Results
The initial search identified 4461 articles; 151 were screened at full text, and 56 were included in the review. In total, seven constructs were measured to evaluate VR, of which presence (n = 26), user experience (n = 15), and motion sickness (n = 6) were the most common. However, these constructs were not always clearly defined, and measures of the same construct often differed in their content. Reliability was reported for 34 (59%) questionnaires, and evaluations of validity were found for 42 (72%) questionnaires. Moreover, recommendations on the most suitable VR questionnaires were proposed for each construct.
Discussion
A wide range of questionnaires used to evaluate VR was identified. Further, VR-related constructs were reviewed by comparing definitions, exploring questionnaire items, and examining their differences. Where relevant, constructs were divided into sub-constructs (e.g. presence was divided into social, self, and spatial presence), and suitable definitions for each (sub-)construct were given. We provide recommendations for a structured approach to the development of measures for evaluating VR, alongside priority areas where new measures are most sorely needed.