Adjunct Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (published 2018-09-23)

Title: Multi-Display Prototyping Using Any Browser Based UX Tools
Authors: Divya Seshadri, John Robert Wilson
DOI: https://doi.org/10.1145/3239092.3267416
Abstract: With advancements in display technology and increased user demands, automotive manufacturers are targeting multiple displays in various sizes and orientations. They are also adding features and interactions that span multiple displays in an attempt to engage more users in the car. Today, developers and designers use 'rapid prototyping' tools like UXPin and Framer to quickly validate and test their ideas, but these tools do not support multi-screen and multi-display prototyping environments. Other prototyping tools that do support such environments, like EB Guide and Qt, require extensive software learning and development time, which defeats the purpose of 'rapid prototyping'. Hence, we propose a solution that helps automotive designers and developers quickly prototype and test their multimedia solutions for multi-screen vehicle systems using any browser-based UX tool.
Title: Application of Augmented Reality for Multi-Scale Interactions in Emergency Vehicles
Authors: George Paravantes, Hilary Leehane, Dhanushka Premarathna, Chloe Chung, Dennis L. Kappen
DOI: https://doi.org/10.1145/3239092.3267415
Abstract: Currently, paramedics receive information from the 911 operator regarding the emergency faced by a patient or victim in medical distress. While many distress scenarios exist, the challenges faced by a victim with a medical problem have to be imagined by the paramedics driving to the emergency. Augmenting the instrument panel of the ambulance dashboard with pre-triage scenarios of patients will help prepare paramedics for an improved patient-care protocol on site. Providing paramedics with patient distress conditions in real time will help facilitate the onboarding experience by syncing vital statistics, body positioning, and level of medical distress.
Title: Gaze Tracking Accuracy Maintenance using Traffic Sign Detection
Authors: Shaohua Jia, Do Hyong Koh, Marc Pomplun
DOI: https://doi.org/10.1145/3239092.3265947
Abstract: Eye tracking technology is becoming an important component of Advanced Driver Assistance Systems. Unfortunately, eye tracking systems require calibration to correctly associate pupil positions with gaze directions, and periodic recalibration is necessary because accuracy deteriorates over time. This routine reduces the usability and practicality of in-vehicle eye tracking technology. We propose an approach that automatically performs real-time eye tracking calibration. We apply an object detection algorithm to continually detect objects that are likely to attract drivers' attention, such as traffic signs and lights. These are, in turn, used as moving stimuli for the gaze accuracy maintenance procedure. The error vectors between recorded fixations and moving targets are calculated immediately, and their weighted average is used to compensate for the offset of fixations in real time. We evaluated our method on both laboratory data and real driving data. The results show that we can effectively reduce gaze tracking errors.
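The compensation step this abstract describes — averaging the error vectors between recorded fixations and detected moving targets, then shifting subsequent gaze estimates by that average — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the class name, the window size, and the linear recency weighting are all hypothetical choices.

```python
from collections import deque

class GazeOffsetCorrector:
    """Sketch of real-time gaze-offset compensation using error vectors
    between fixations and moving targets (e.g., detected traffic signs).
    The recency-based weighting scheme is an illustrative assumption."""

    def __init__(self, window=10):
        # Keep only the most recent error vectors.
        self.errors = deque(maxlen=window)

    def add_sample(self, fixation, target):
        # Error vector: how far the recorded fixation missed the target.
        fx, fy = fixation
        tx, ty = target
        self.errors.append((tx - fx, ty - fy))

    def offset(self):
        # Weighted average of stored error vectors; newer samples get
        # linearly higher weights.
        if not self.errors:
            return (0.0, 0.0)
        weights = range(1, len(self.errors) + 1)
        total = sum(weights)
        ox = sum(w * e[0] for w, e in zip(weights, self.errors)) / total
        oy = sum(w * e[1] for w, e in zip(weights, self.errors)) / total
        return (ox, oy)

    def correct(self, gaze):
        # Apply the current offset estimate to a raw gaze point.
        ox, oy = self.offset()
        return (gaze[0] + ox, gaze[1] + oy)

corrector = GazeOffsetCorrector()
corrector.add_sample(fixation=(100.0, 50.0), target=(104.0, 52.0))
corrector.add_sample(fixation=(200.0, 80.0), target=(204.0, 82.0))
print(corrector.correct((150.0, 60.0)))  # shifted by the estimated (4, 2) offset
```

A bounded window like this lets the correction track slow drift (the accuracy deterioration the abstract mentions) while discarding stale samples.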
Title: LED Visualizations for Drivers' Attention: An Exploratory Study on Experience and Associated Information Contents
DOI: https://doi.org/10.1145/3239092.3265966
Abstract: When it comes to highly automated driving, several studies indicate that drivers should be "kept in the loop" while driving in automated mode in order to be better prepared when they need to take over. The challenge lies in raising drivers' situation awareness without annoying the driver, who may be occupied with another task. Ambient light systems using LED visualizations provide a feasible way to draw attention; however, the kind of information that can be communicated is limited. In this paper, we present an exploratory study in which we investigated the semantic quality of different LED patterns (shown on an LED strip) by capturing experience and associated information contents. Our initial findings show that LED visualizations that are experienced quite similarly at first can nonetheless be distinctive with regard to the associated information contents.
Title: An Augmented Reality Display for Conditionally Automated Driving
Authors: Nadja Schömig, Katharina Wiedemann, Frederik Naujoks, A. Neukum, Bettina Leuchtenberg, Thomas Vöhringer-Kuhnt
DOI: https://doi.org/10.1145/3239092.3265956
Abstract: This paper investigates whether an Augmented Reality Head-up Display (AR-HUD) supports usability and reduces visual demand during conditionally automated driving. In a driving simulator study, 24 drivers experienced several driving scenarios under conditional automation. The drivers completed one drive with a fully developed HMI designed for automated driving (AD-HMI) that presented visual information in the cluster display and included auditory and tactile output. In another drive, the same drivers were additionally supported by dynamic and static visual feedback via an AR-HUD concept. The latter was preferred by more than 80% of the sample due to its higher information content and the possibility of keeping the eyes on the road. Drivers rated the AR concept as easier to understand and more useful. Eye tracking revealed a lower percentage of gazes to the instrument cluster during AR-HUD drives.
Title: Early Take-Over Preparation in Stereoscopic 3D
Authors: Michael Braun, S. Völkel, Heinrich Hußmann, Anna-Katharina Frison, Florian Alt, A. Riener
DOI: https://doi.org/10.1145/3239092.3265957
Abstract: Situation awareness in highly automated vehicles can help the driver get back in the loop during a take-over request (TOR). We propose presenting the driver with a detailed digital representation of situations causing a TOR via a scaled-down digital twin of the highway inside the car. The digital twin visualizes real-time traffic information and is displayed before the actual TOR. In the car cockpit, an augmented reality headset or a stereoscopic 3D (S3D) interface can realize the augmentation. As today's hardware has technical limitations, we built an HMD-based mock-up. We conducted a user study (N=20) to assess driver behavior during a TOR. We found that workload decreases and steering performance rises significantly with the proposed system. We argue that augmenting the surrounding world in the car helps improve performance during a TOR due to better awareness of the upcoming situation.
Title: Steer-By-WiFi: Lateral Vehicle Control for Take-Overs with Nomadic Devices
Authors: Clemens Schartmüller, A. Riener, Philipp Wintersberger
DOI: https://doi.org/10.1145/3239092.3265954
Abstract: Automated vehicles could omit traditional steering controls to provide larger spaces for driver-passengers or prevent unnecessary interventions. However, manual control could still be necessary to provide manual driving fun or to respond to take-over requests (TORs). This paper investigates whether brought-in consumer devices (in this case a 10.2-inch tablet) can serve as an input alternative to classical steering wheels in TOR situations. Results of a driving simulator study (n=14) confirm that responding to take-overs with nomadic devices can reduce response times in imminent transitions during engagement in Non-Driving Related Tasks (NDRTs), as a change of the 'device in hands' is omitted. Furthermore, subjective scales addressing user experience show that the approach is well accepted. We conclude that nomadic device integration is a crucial prerequisite for the success of automated vehicles, but several pivotal issues around steering input still need to be solved.
Title: Second Workshop on Trust in the Age of Automated Driving
Authors: Philipp Wintersberger, Brittany E. Noah, Johannes Kraus, Rod McCall, Alexander G. Mirnig, Alexander Kunze, Shailie Thakkar, Bruce N. Walker
DOI: https://doi.org/10.1145/3239092.3239099
Abstract: This workshop addresses key trust-related issues in the context of automated driving and aims at establishing a common ground for future research. Building on the outcome of the previous workshop at AutoUI 2017, three main aspects are targeted within interactive sessions: (1) formulation of a comprehensive set of definitions for trust in automated systems; (2) development of interface approaches for mitigating overtrust and undertrust issues; and (3) identification of an appropriate timing of trust-related cues. The current research efforts of both workshop organizers and participants are used as a starting point for several breakout sessions, each addressing one of the three main workshop goals. The outcome of this workshop will provide a benchmark for future work in the field and is also intended to inspire joint publications among the participants.
Title: Workshop on Communication between Automated Vehicles and Vulnerable Road Users
Authors: Alexander G. Mirnig, Philipp Wintersberger, Alexander Meschtscherjakov, A. Riener, Susanne CJ Boll
DOI: https://doi.org/10.1145/3239092.3239100
Abstract: The aim of this half-day workshop is to explore the topic of interaction between automated vehicles and vulnerable road users (VRUs), such as pedestrians or cyclists, in an interactive setting. The workshop is hands-on, with no submission of position papers or slots for participant presentations. It aims at deriving knowledge about communication needs across various traffic scenarios, resulting in metrics and methodologies for evaluating communication needs, by having the participants go through a brief design and evaluation process in a two-step setting. The workshop results will be collected and preserved on a website hosted by the organisers post-workshop.
Title: Workshop on The Mobile Office
DOI: https://doi.org/10.1145/3239092.3239094
Abstract: This workshop discusses the balance between safety and productivity as automated vehicles turn into 'mobile offices': spaces where non-driving activities are performed during one's daily commute. Technological developments reduce the active role of the human driver, who might, nonetheless, be required to intervene occasionally. To what extent are drivers allowed to dedicate resources to non-driving, work-related activities? To address this critical question, the workshop brings together a diverse community of researchers and practitioners interested in questions such as: what non-driving activities are likely to be performed on one's way to work and back; what is a useful taxonomy of these tasks; how can various tasks be studied in experimental settings; and what are the criteria for assessing human performance in automated vehicles. To foster further dialogue, the outcome of the workshop will be an online blog where attendees can contribute their own thoughts: https://medium.com/the-mobile-office.