Navigating routes with a navigation system while riding Personal Mobility Vehicles (PMVs) such as bikes or scooters can cause visual distraction in outdoor environments, increasing the risk of an accident. This article proposes NaVibar, a new navigation system for PMVs that uses vibrotactile feedback on the handlebar to enhance route information delivery and reduce visual distraction. The study aims to answer four research questions about visual distraction, route recognition, mental workload, and usability. The results showed that vibrotactile feedback can be an effective and useful addition to PMV navigation systems, reducing visual distraction and enhancing the user experience. Vibrotactile feedback did not affect participants' route recognition, but it was associated with lower workload levels. Our study therefore demonstrates that adding vibrotactile feedback could enhance the usability and safety of PMV navigation systems.
{"title":"Feeling Your Way with Navibar: A Navigation System using Vibrotactile Feedback for Personal Mobility Vehicle Users","authors":"Mungyeong Choe, Ajit Gopal, Abdulmajid S. Badahdah, Esha Mahendran, Darrian Burnett, Myounghoon Jeon","doi":"10.1177/21695067231194994","DOIUrl":"https://doi.org/10.1177/21695067231194994","url":null,"abstract":"Navigating routes using navigation systems while using Personal Mobility Vehicles (PMVs) like bikes or scooters can lead to visual distraction in outdoor environments, creating possibilities for an accident. This article proposes a new navigation system called NaVibar for PMVs that uses vibrotactile feedback on the handlebar to enhance route information delivery and reduce visual distraction. The study aims to answer four research questions about visual distraction, route recognition, mental workload, and usability. The results of the study showed that vibrotactile feedback can be an effective and useful addition to the PMVs navigation system, reducing visual distraction, and enhancing the user experience. Also, vibrotactile feedback did not affect the participants' route recognition, but it positively affected the participants’ lower workload levels. Therefore, our study demonstrates that the addition of vibrotactile feedback could enhance the usability and safety of PMV navigation systems.","PeriodicalId":20673,"journal":{"name":"Proceedings of the Human Factors and Ergonomics Society Annual Meeting","volume":"85 3 1","pages":"2233 - 2240"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139344360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Audio Augmented Reality (AAR) applications are gaining traction, especially for entertainment purposes. To that end, the current study explored the use and effectiveness of AAR in enhancing art gallery visitors' experiences. Four paintings were selected and sonified using a Jython-based algorithm to produce computer-generated music (Basic AAR); the audio was then further enhanced with traditional music by a musician (Enhanced AAR). Twenty-six participants experienced each painting in the Basic, Enhanced, and No AAR conditions. Results show that AAR cues had a significant effect on participants' subjective feedback towards the paintings. Sentiment analysis shows that participants mentioned significantly more positive words in the Enhanced AAR condition than in the others. Enhanced AAR also made participants express a sense of immersion, whereas Basic AAR made them concentrate more on forlorn aspects of the paintings. Findings from this study suggest ways to improve and customize AAR cues for different painting styles, and indicate the need for multi-modal augmentations.
{"title":"Enhancing Art Gallery Visitors' Experiences through Audio Augmented Reality Technology","authors":"Abhraneil Dam, Yeaji Lee, Arsh Siddiqui, Wallace Santos Lages, Myounghoon Jeon","doi":"10.1177/21695067231192706","DOIUrl":"https://doi.org/10.1177/21695067231192706","url":null,"abstract":"Audio Augmented Reality (AAR) applications are gaining traction, especially for entertainment purposes. To that extent, the current study explored its use and effectiveness in enhancing art gallery visitors’ experiences. Four paintings were selected and sonified using the Jython algorithm to produce computer generated music (Basic AAR); the audio was then further enhanced with traditional music by a musician (Enhanced AAR). Twenty-six participants experienced each painting in Basic, Enhanced, and No AAR condition. Results show that AAR cues had a significant effect on participants’ subjective feedback towards the paintings. Sentiment Analysis shows that participants mentioned significantly more positive words from Enhanced AAR than the others. Enhanced AAR also made participants express a sense of immersion, whereas Basic AAR made them concentrate more on forlorn aspects of the paintings. Findings from this study suggest ways to improve and customize AAR cues for different painting styles, and indicate the need for multi-modal augmentations.","PeriodicalId":20673,"journal":{"name":"Proceedings of the Human Factors and Ergonomics Society Annual Meeting","volume":"70 1","pages":"971 - 977"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139345045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-01. DOI: 10.1177/21695067231208171
Jose A. Calvo, Jesse Eisert, Laura Sandt, David Kidd, Chris Monk, B. Dadashova, Charlie Klauer
Driving automation and vulnerable road users are two pieces of the enormous puzzle that is roadway safety. Ideally, driving automation will improve the safety of vulnerable road users. However, more research is needed to understand the effects driving automation will have on the safety of vulnerable road users. In this panel we will examine the relationship between driving automation and vulnerable road users from several different perspectives. Regulatory and research initiatives will be presented, lessons that can be learned from existing technology will be examined, and questions of equitable solutions will be raised. These interdisciplinary experts are brought together with the audience to discuss research needs and the possible effects of driving automation implementation on vulnerable road users, and to try to determine just how well these two pieces of roadway safety fit together.
{"title":"Driving Automation and Vulnerable Road Users: Peanut Butter & Jelly or Oil & Water?","authors":"Jose A. Calvo, Jesse Eisert, Laura Sandt, David Kidd, Chris Monk, B. Dadashova, Charlie Klauer","doi":"10.1177/21695067231208171","DOIUrl":"https://doi.org/10.1177/21695067231208171","url":null,"abstract":"Driving automation and vulnerable road users are two pieces of the enormous puzzle that is roadway safety. Ideally, driving automation will improve the safety of vulnerable road users. However, more research needed to understand the effects driving automation will have on the safety of vulnerable road users. In this panel we will examine the relationship between driving automation and vulnerable road users from several different perspectives. Regulatory and research initiatives will be presented, lessons that can be learned from existing technology will be examined, and questions of equitable solutions will be raised. These interdisciplinary experts are brought together with the audience to discuss the research needs, possible effects of driving automation implementation on vulnerable road users, and to try and determine just how well these two pieces of roadway safety fit together.","PeriodicalId":20673,"journal":{"name":"Proceedings of the Human Factors and Ergonomics Society Annual Meeting","volume":"14 1","pages":"2530 - 2533"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139346053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-01. DOI: 10.1177/21695067231192243
Xialou Bai, Celia Henderson, Dylan H. Hewitt, Zachary Traylor, Ziyang Xie, Chang Nam
In this study, we sought to compare the mental workload of eating/drinking and phone use while driving. This was done by using Micro Saint Sharp to model a simulated driving task that included stop lights combined with either eating and drinking or a phone call. We hypothesized that the mental workload for phone use and eating/drinking would be the same, as the literature suggests that eating while driving can be equally dangerous. Results show that eating and drinking were associated with a lower mental workload than phone use, and that both were associated with significantly higher workload than baseline. This research has the potential to inform future legislation regarding driver safety.
{"title":"Are Eating and Phone Use Equally Distracting? A Simulated Driving Model Comparison","authors":"Xialou Bai, Celia Henderson, Dylan H. Hewitt, Zachary Traylor, Ziyang Xie, Chang Nam","doi":"10.1177/21695067231192243","DOIUrl":"https://doi.org/10.1177/21695067231192243","url":null,"abstract":"In this study, we sought to compare the mental workload of eating/drinking and phone use while driving. This was done by using Micro Saint Sharp to model a simulated driving task that included stop lights combined with either eating and drinking or a phone call. We hypothesized that the mental workload for phone use and eating/drinking would be the same, as literature suggests that eating while driving can be equally dangerous. Results show that eating and drinking were associated with a lower mental workload than phone use, and both eating and drinking are associated with significantly higher workload than baseline. This research has the potential to inform future legislation regarding driver safety.","PeriodicalId":20673,"journal":{"name":"Proceedings of the Human Factors and Ergonomics Society Annual Meeting","volume":"15 1","pages":"1606 - 1610"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139346423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-01. DOI: 10.1177/21695067231193654
Nadia Fereydooni, Sidney T. Scott-Sharoni, Bruce N. Walker, John K. Lenneman, Benjamin P. Austin, Takeshi Yoshida
The manner in which an HMI presents information can affect a user's experience in highly automated vehicles. This online experimental study analyzed how informing participants of future or current events and vehicle behaviors in various modalities affected trust and comfort. We observed that presenting users with information about upcoming road events and vehicle maneuvers led to greater user trust and comfort, but only when users were also alerted about the immediately occurring events. Psychological and human-robot interaction theories provided explanations for why a combination of current and future alerts may result in higher user trust. Understanding how content temporality impacts the individual has design implications that may lead to an increased partnership and optimized interaction between a person and their highly automated vehicle.
{"title":"The Impact of Content Temporality and Modality in Automotive User Interface on Trust and Comfort","authors":"Nadia Fereydooni, Sidney T. Scott-Sharoni, Bruce N. Walker, John K. Lenneman, Benjamin P. Austin, Takeshi Yoshida","doi":"10.1177/21695067231193654","DOIUrl":"https://doi.org/10.1177/21695067231193654","url":null,"abstract":"The manner an HMI presents information can affect a user's experience in highly automated vehicles. This online experimental study analyzed how informing participants of future or current events and vehicle behaviors in various modalities affected trust and comfort. We observed that presenting users with information about upcoming road events and vehicle maneuvers lead to greater user trust and comfort, but only when also alerting users about the immediately occurring events. Psychological and human-robot interaction theories provided explanations for why a combination of current and future alerts may result in higher user trust. Understanding how content temporality impacts the individual has design implications that may potentially lead to an increased partnership and optimized interaction between a person and their highly automated vehicle.","PeriodicalId":20673,"journal":{"name":"Proceedings of the Human Factors and Ergonomics Society Annual Meeting","volume":"25 1","pages":"1971 - 1976"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139346813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-01. DOI: 10.1177/21695067231205573
Monica L. H. Jones, Sheila M. Ebert, Carl S. Miller, Matthew P. Reed
Strength data for children are needed to improve the safety of products. Currently, minimal information on strength is available for children under age 6 years. This paper describes the development of methods to measure functional, task-relevant strength for children ages 24 to 71 months. Strength measurement methods used for adults and older children must be adapted substantially to obtain meaningful data from this younger cohort. This paper discusses the challenges associated with gathering volitional, maximal force-generation capability for this age cohort with attention to applicability, repeatability, and reproducibility.
{"title":"New Methods to Quantify Functional Strength for Young Children: Laboratory and Task Design Considerations","authors":"Monica L. H. Jones, Sheila M. Ebert, Carl S. Miller, Matthew P. Reed","doi":"10.1177/21695067231205573","DOIUrl":"https://doi.org/10.1177/21695067231205573","url":null,"abstract":"Strength data for children are needed to improve the safety of products. Currently, minimal information on strength is available for children under age 6 years. This paper describes the development of methods to measure functional, task-relevant strength for children ages 24 to 71 months. Strength measurement methods used for adults and older children must be adapted substantially to obtain meaningful data from this younger cohort. This paper discusses the challenges associated with gathering volitional, maximal force-generation capability for this age cohort with attention to applicability, repeatability, and reproducibility.","PeriodicalId":20673,"journal":{"name":"Proceedings of the Human Factors and Ergonomics Society Annual Meeting","volume":"55 1","pages":"2538 - 2544"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139344025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-01. DOI: 10.1177/21695067231192411
Patrick Seebold, Chang S. Nam, Yingchen He
Looming sounds have been shown to influence visual perception, suggesting they may make effective auditory warning tones for use with visual tasks. To explore the use of looming sounds as warning signals and to determine whether the strength of a looming sound will impact its effectiveness, we tested five looming sounds with different amplitude increases as warning tones in a contrast sensitivity task where participants made judgements concerning the orientation of low-contrast sinusoidal gratings. Reaction time, accuracy, and contrast threshold were measured for each sound condition. Our results indicate that accuracy was higher and reaction time was faster when a sound was present compared to when no sound was present, and contrast threshold was significantly lowered in sound trials compared to silent trials. However, there was no difference in accuracy, reaction time, or contrast threshold by strength of looming. These results suggest that while auditory warning sounds do enhance performance on a basic visual task, the benefit was not unique to looming sounds. This experiment will help inform the design of warnings by providing insight into the underlying effects of looming sounds on visual performance.
{"title":"Looming sounds as auditory warnings: Uses for enhancing visual contrast sensitivity?","authors":"Patrick Seebold, Chang S. Nam, Yingchen He","doi":"10.1177/21695067231192411","DOIUrl":"https://doi.org/10.1177/21695067231192411","url":null,"abstract":"Looming sounds have been shown to influence visual perception, suggesting they may make effective auditory warning tones for use with visual tasks. To explore the use of looming sounds as warning signals and to determine whether the strength of a looming sound will impact its effectiveness, we tested five looming sounds with different amplitude increases as warning tones in a contrast sensitivity task where participants made judgements concerning the orientation of low-contrast sinusoidal gratings. Reaction time, accuracy, and contrast threshold were measured for each sound condition. Our results indicate that accuracy was higher and reaction time was faster when a sound was present compared to when no sound was present, and contrast threshold was significantly lowered in sound trials compared to silent trials. However, there was no difference in accuracy, reaction time, or contrast threshold by strength of looming. These results suggest that while auditory warning sounds do enhance performance on a basic visual task, the benefit was not unique to looming sounds. This experiment will help inform the design of warnings by providing insight into the underlying effects of looming sounds on visual performance.","PeriodicalId":20673,"journal":{"name":"Proceedings of the Human Factors and Ergonomics Society Annual Meeting","volume":"77 1","pages":"908 - 913"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139345891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-01. DOI: 10.1177/21695067231192480
J. Edworthy, Rachael Brown, Connor Wessel
A bicycle bell is an important safety add-on and is used by the vast majority of bicycle users. Traditional bicycle bells make a single sound based on the construction of the bell, ranging from the gentle ring of a bell to the honk of a horn. A new wave of digital bells is now appearing that can make a range of sounds. One of the challenges with a digital bell is deciding what sorts of sounds should be made, how they will be responded to, and whether they are perceived as suitable for use in a digital bell. In this paper we present a study on the localizability of a set of digital bicycle bell sounds, and how they are perceived along a range of relevant perceptual and aesthetic dimensions.
{"title":"Where’s that bike? Sounds and metrics for a smart bicycle bell","authors":"J. Edworthy, Rachael Brown, Connor Wessel","doi":"10.1177/21695067231192480","DOIUrl":"https://doi.org/10.1177/21695067231192480","url":null,"abstract":"A bicycle bell is an important safety add-on and is used by the vast majority of bicycle users. Traditional bicycle bells make a single sound based on the construction of the bell, ranging from the gentle ring of a bell to the honk of a horn. A new wave of digital bell is now appearing which can make a range of sounds. One of the challenges of a digital bell is the issue as to what sorts of sounds should be made, how they will be responded to, and whether they are perceived as suitable for use in a digital bell. In this paper we present a study on the localizability of a set of digital bicycle bells, and how they are perceived along a range of relevant perceptual and aesthetic dimensions.","PeriodicalId":20673,"journal":{"name":"Proceedings of the Human Factors and Ergonomics Society Annual Meeting","volume":"6 1","pages":"2519 - 2524"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139346795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-01. DOI: 10.1177/21695067231192534
Linfeng Wu, Brian Sekelsky, Matthew Peterson, Tyler W. Gampp, Cesar Delgado, Karen B. Chen
Feedback-based iterative refinement is important in the development of any human-computer interface. The present work aims to evaluate and iteratively refine an immersive learning environment called Scale Worlds (SW), delivered via a head-mounted display (HMD). SW is a virtual learning environment encompassing scientific entities across a wide range of sizes that offers students an embodied experience while learning size and scale. Five usability experts performed a think-aloud protocol while carrying out four interactive tasks in SW and compared three different design options during A/B testing. Improvement features based on feedback from an earlier SW usability evaluation, as well as HMD-specific features, were examined. The usability experts completed the post-study system usability questionnaire, the NASA task load index, and a bipolar laddering survey that collected subjective perceptions of specific SW features. Results show that the progress panel (an improvement feature) was informative while the instructions (another improvement feature) caused clutter. The experts indicated clear usability preferences during A/B testing, which helped resolve three sets of theory-usability conflicts. The overall assessment of SW paved a path for theory-usability balance and provided valuable insights for designing and evaluating usability in immersive virtual learning environments.
{"title":"Scale Worlds: Iterative refinement, evaluation, and theory-usability balance of an immersive virtual learning environment","authors":"Linfeng Wu, Brian Sekelsky, Matthew Peterson, Tyler W. Gampp, Cesar Delgado, Karen B. Chen","doi":"10.1177/21695067231192534","DOIUrl":"https://doi.org/10.1177/21695067231192534","url":null,"abstract":"Feedback-based iterative refinement is important in the development of any human-computer interface. The present work aims to evaluate and iteratively refine an immersive learning environment called Scale Worlds (SW), delivered via a head-mounted display (HMD). SW is a virtual learning environment encompassing scientific entities of a wide range of sizes that enables students an embodied experience while learning size and scale. Five usability experts performed think aloud while carrying out four interactive tasks in SW and compared three different design options during A/B testing. Improvement features based on the feedback from an earlier SW usability evaluation as well as HMD-specific features were examined. Usability experts completed the post-study system usability questionnaire, the NASA task load index, and a bipolar laddering survey that collected subjective perception of specific SW features. Results show that the progress panel (an improvement feature) was informative while the instructions (another improvement feature) caused clutter. The experts indicated clear usability preferences during A/B testing, which helped resolve three sets of theory-usability conflicts. The overall assessment of SW paved a path for theory-usability balance and provided valuable insights for designing and evaluating usability in immersive virtual learning environments.","PeriodicalId":20673,"journal":{"name":"Proceedings of the Human Factors and Ergonomics Society Annual Meeting","volume":"111 1","pages":"2382 - 2388"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139343402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-01. DOI: 10.1177/21695067231192409
Sahil Sawant, Pratyusha Joshi, Sarvesh Sawant
This literature review examines the state of usability testing in medical device design, focusing on the techniques and instruments used to assess user-centered design. The analysis includes 30 studies from various fields, such as telemedicine, assistive technology, and healthcare devices. The review discusses different UX analysis techniques, along with their benefits and drawbacks. Usability testing is critical in healthcare equipment design and utilizes various approaches and metrics. However, the analysis also highlights significant obstacles that need to be addressed, such as the need for more rigorous testing procedures and for more consistent application of user-centered design concepts. Overall, the review underscores the importance of usability testing in creating user-friendly, secure, and efficient medical equipment that benefits both patients and medical professionals.
{"title":"Usability testing of Healthcare Devices: A review of the current UX methods used for usability testing of healthcare devices","authors":"Sahil Sawant, Pratyusha Joshi, Sarvesh Sawant","doi":"10.1177/21695067231192409","DOIUrl":"https://doi.org/10.1177/21695067231192409","url":null,"abstract":"This literature review examines the state of usability testing in medical device design, focusing on techniques and instruments used for assessing user-centered design. The analysis includes 30 studies from various fields, such as telemedicine, assistive technology, and healthcare devices. The review discusses different UX analysis techniques, their benefits and their drawbacks. Usability testing is critical in healthcare equipment design, utilizing various approaches and metrics. However, the analysis also highlights significant obstacles that need to be addressed, such as the need for more rigorous testing procedures and the application of user-centered design concepts. Overall, the review underscores the importance of usability testing in creating user-friendly, secure, and efficient medical equipment that benefits both patients and medical professionals.","PeriodicalId":20673,"journal":{"name":"Proceedings of the Human Factors and Ergonomics Society Annual Meeting","volume":"79 1","pages":"1078 - 1083"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139344653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}