How People Experience Autonomous Intersections: Taking a First-Person Perspective
S. Krome, D. Goedicke, T. Matarazzo, Zimeng Zhu, Zhenwei Zhang, J.D. Zamfirescu-Pereira, Wendy Ju
Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 21 September 2019. DOI: 10.1145/3342197.3344520

Top-down simulations of autonomous intersections neglect the human experience of being in the cars that drive through them. To understand the impact that perspective has on the perception of autonomous intersections, we conducted a driving simulator experiment and studied the experience in terms of perception, feelings, and pleasure. Based on these data, we discuss experiential factors of autonomous intersections that are perceived as beneficial or detrimental for the future driver. Furthermore, we present what the change of perspective implies for designing intersection models, future in-car interfaces, and simulation techniques.
From Manual Driving to Automated Driving: A Review of 10 Years of AutoUI
Jackie Ayoub, S. Bao
Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 21 September 2019. DOI: 10.1145/3342197.3344529

This paper gives an overview of the ten-year development of the papers presented at the International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutoUI) from 2009 to 2018. We categorize the topics into two main groups: manual-driving-related research and automated-driving-related research. Within manual driving, we focus on studies of user interfaces (UIs), driver states, augmented reality and head-up displays, and methodology; within automated driving, we discuss topics such as takeover, acceptance and trust, interacting with road users, UIs, and methodology. We also discuss the main challenges and future directions for AutoUI and offer a roadmap for research in this area.
Where Does It Go?: A Study on Visual On-Screen Designs for Exit Management in an Automated Shuttle Bus
Alexander G. Mirnig, Magdalena Gärtner, Vivien Wallner, Sandra Trösterer, Alexander Meschtscherjakov, M. Tscheligi
Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 21 September 2019. DOI: 10.1145/3342197.3344541

Riding a highly automated bus has the potential to bring about a set of novel challenges for the passenger. As there is no human driver present, there is no one to talk to about driving direction, stops, or delays. This lack of a human element is likely to cause a stronger reliance on in-vehicle means of communication, such as displays. In this paper, we present the results of a qualitative study in which we tested three different on-screen visualizations for passenger information during an automated bus trip. The designs focused primarily on signaling the next stop and the proper time to request a stop in the absence of a human driver. We found that adding geo-spatial details can easily confuse more than help, and that the absence of a human driver makes passengers feel more insecure about being able to exit at the right stop. Thus, passengers are less receptive to visual cues signaling upcoming stops and more likely to input stop requests immediately upon leaving the station.
Voices in Self-Driving Cars Should be Assertive to More Quickly Grab a Distracted Driver's Attention
Priscilla N. Y. Wong, Duncan P. Brumby, Harsha Vardhan Ramesh Babu, Kota Kobayashi
Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 21 September 2019. DOI: 10.1145/3342197.3344535

Automated driving will mean that people can engage in other activities, and an important concern will be how to alert the driver to critical events that require their intervention. This study evaluates how the assertiveness of voice commands in a semi-automated vehicle and the driver's degree of immersion in a non-driving task affect attention on the road. In a simulated set-up, 20 participants were required to execute actions on the steering wheel when a voice command was given while playing a mobile game. Regardless of how immersed the driver was in the game, a more assertive voice resulted in faster reaction times to the instructions and was perceived as more urgent than a less assertive voice. Automotive systems should use an assertive voice to effectively grab people's attention; this is effective even when drivers are engaged in an immersive secondary task.
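The within-subjects reaction-time comparison at the heart of such a study can be sketched with a paired t-statistic. The data and function below are hypothetical illustrations, not the paper's actual analysis or numbers:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(assertive_rt, calm_rt):
    """Paired t-statistic over per-participant reaction times (seconds).

    A negative t means responses to the assertive voice were faster.
    Degrees of freedom are n - 1; look up the p-value in a t-table.
    """
    diffs = [a - c for a, c in zip(assertive_rt, calm_rt)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical reaction times for five participants (seconds).
assertive = [0.84, 0.91, 0.78, 0.88, 0.95]
calm = [1.10, 1.05, 0.99, 1.21, 1.08]
t = paired_t(assertive, calm)  # t < 0: assertive voice was faster
```

In a real analysis one would use a library routine such as `scipy.stats.ttest_rel`, which also reports the p-value; the point here is only the shape of the within-subjects comparison.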
Who Has The Right of Way, Automated Vehicles or Drivers?: Multiple Perspectives in Safety, Negotiation and Trust
Priscilla N. Y. Wong
Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 21 September 2019. DOI: 10.1145/3342197.3344536

Public opinion suggests that it is still unclear how people will react when automated vehicles (AVs) emerge on the roads. Fatal accidents involving AVs have received wide media attention, possibly disproportionate to their frequency. How does the framing of such stories affect public perceptions of AVs? Few drivers have encountered AVs, but how do they imagine interacting with them in the near future? This survey study with 600 UK and Hong Kong drivers addressed these two questions. After reading news 'vignettes' reporting an imagined car crash, respondents presented with subjective information perceived AVs as less safe than those presented with factual information. We draw implications for news media framing effects, suggesting that negative newsflow be countered with factual information. Respondents were also presented with an imagined interaction with human-driven vehicles and AVs and did not differentiate between the two. Results for other variables, e.g., first- and third-person framing and cultural differences, are also reported.
Cooperative Overtaking: Overcoming Automated Vehicles' Obstructed Sensor Range via Driver Help
Marcel Walch, Marcel Woide, Kristin Mühl, M. Baumann, M. Weber
Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 21 September 2019. DOI: 10.1145/3342197.3344531

Automated vehicles will eventually operate safely without the need for human supervision and fallback; nevertheless, scenarios will remain that are managed more efficiently by a human driver. A common approach to overcoming such weaknesses is to shift control to the driver. Control transitions are challenging due to human-factors issues such as post-automation behavior changes. We therefore investigated cooperative overtaking, wherein driver and vehicle complement each other: drivers support the vehicle in perceiving the traffic scene and decide when to execute a maneuver, whereas the system steers. We explored two maneuver approval-and-cancel techniques on touchscreens and show that cooperative overtaking is feasible: both interaction techniques provided good usability and were preferred over manual maneuver execution. However, participants disregarded rear traffic in more complex situations. Consequently, system weaknesses can be overcome with cooperation, but drivers should be assisted by an adaptive system.
Projection Displays Induce Less Simulator Sickness than Head-Mounted Displays in a Real Vehicle Driving Simulator
T. Benz, B. Riedl, L. Chuang
Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 21 September 2019. DOI: 10.1145/3342197.3344515

Driving simulators are necessary for evaluating automotive technology with human users. While they can vary in fidelity, it is essential that users experience minimal simulator sickness and high presence in them. In this paper, we present two experiments that investigate how a virtual driving simulation system could be visually presented within a real vehicle, which moves on a test track but displays a virtual environment. Specifically, we contrasted presentation of the simulation using either head-mounted displays (HMDs) or fixed displays in the vehicle itself. Overall, we find that fixed displays induced less simulator sickness than HMDs. Neither HMDs nor fixed displays induced a stronger presence in our implementation, even when the field of view of the fixed display was extended. We discuss the implications of this, particularly in the context of scenarios that could induce considerable motion sickness, such as testing non-driving activities in automated vehicles.
AttentivU: Designing EEG and EOG Compatible Glasses for Physiological Sensing and Feedback in the Car
Nataliya Kosmyna, Caitlin Morris, Thanh Nguyen, Sebastian Zepf, Javier Hernández, P. Maes
Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 21 September 2019. DOI: 10.1145/3342197.3344516

Several research projects have recently explored the use of physiological sensors such as electroencephalography (EEG) or electrooculography (EOG) to measure the engagement and vigilance of a user in the context of car driving. However, these systems still suffer from limitations such as the absence of a socially acceptable form factor and the use of impractical, gel-based electrodes. We present AttentivU, a device that uses both EEG and EOG for real-time monitoring of physiological data. The device is designed as a socially acceptable pair of glasses and employs silver electrodes. It also supports real-time delivery of feedback in the form of an auditory signal via a bone-conduction speaker embedded in the glasses. We provide a detailed description of the hardware design and a proof-of-concept prototype, as well as preliminary data collected from 20 users performing a driving task in a simulator to evaluate the signal quality of the physiological data.
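The sense-then-alert loop described above can be sketched as a rolling-window monitor. This is a toy illustration of the general pattern, not AttentivU's actual algorithm; the engagement score and the 0.4 threshold are assumptions, and feature extraction from raw EEG/EOG is out of scope:

```python
from collections import deque

class VigilanceMonitor:
    """Average a per-sample engagement score in [0, 1] over a rolling
    window; signal the auditory cue when the mean drops below a
    threshold. Window size and threshold are hypothetical parameters."""

    def __init__(self, window=50, threshold=0.4):
        self.scores = deque(maxlen=window)  # keeps only the last `window` samples
        self.threshold = threshold

    def update(self, engagement):
        """Feed one engagement score; return True when the windowed
        mean falls below the threshold (i.e., play the cue)."""
        self.scores.append(engagement)
        return sum(self.scores) / len(self.scores) < self.threshold

monitor = VigilanceMonitor(window=5, threshold=0.4)
alert = monitor.update(0.9)  # one attentive sample: no alert yet
```

A deployed system would add hysteresis or a refractory period so the cue does not re-trigger on every sample while the score hovers near the threshold.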
How Should Automated Vehicles Interact with Pedestrians?: A Comparative Analysis of Interaction Concepts in Virtual Reality
Andreas Löcken, Carmen Golling, A. Riener
Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 21 September 2019. DOI: 10.1145/3342197.3344544

Automated vehicles (AVs) introduce a new challenge to human-computer interaction (HCI): pedestrians are no longer able to communicate with a human driver. Hence, new HCI designs need to fill this gap. This work presents the implementation and comparison of different interaction concepts in virtual reality (VR). They were derived from an analysis of 28 works from research and industry, which we classified into five groups according to their complexity and type of communication. We implemented one concept per group for a within-subjects experiment in VR. For each concept, we varied whether the AV was going to stop and how early it started to activate its display. We observed effects on safety, trust, and user experience. A good concept displays information on the street, uses unambiguous signals (e.g., green lights), and has high visibility. Additional feedback, such as continuously showing the recognized pedestrian's location, seems to be unnecessary and may irritate pedestrians.
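The design guidance above (activate early, show one unambiguous signal, skip per-pedestrian feedback) can be sketched as a tiny external-display policy. Everything here is hypothetical: the 30 m activation distance and the green/red mapping are assumed parameters, not the paper's concepts:

```python
from enum import Enum

class Intent(Enum):
    YIELDING = 1
    NOT_YIELDING = 2

def ehmi_state(intent, distance_m, activation_distance_m=30.0):
    """Map AV intent and distance to the crossing onto a single
    unambiguous display state. Deliberately stateless: no tracking
    or per-pedestrian acknowledgement, per the findings above."""
    if distance_m > activation_distance_m:
        return "off"  # too far away to be relevant to the pedestrian
    return "green" if intent is Intent.YIELDING else "red"
```

Keeping the policy a pure function of intent and distance makes the signal predictable, which is exactly the property the study found pedestrians rely on.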
To Please in a Pod: Employing an Anthropomorphic Agent-Interlocutor to Enhance Trust and User Experience in an Autonomous, Self-Driving Vehicle
D. Large, Kyle Harrington, G. Burnett, J. Luton, Peter Thomas, P. Bennett
Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 21 September 2019. DOI: 10.1145/3342197.3344545

Recognising that one of the aims of conversation is to build, maintain, and strengthen positive relationships with others, this study explores whether passengers in an autonomous vehicle display similar behaviour during transactions with an on-board conversational agent-interface, and whether related attributes (e.g. trust) transcend to the vehicle itself. Employing a counterbalanced, within-subjects design, thirty-four participants were transported in a self-driving pod around an expansive testing arena. Participants undertook three journeys with an anthropomorphic agent-interlocutor (via Wizard-of-Oz), a voice-command interface, or a traditional touch-surface, each delivering equivalent task-related information. Results show that the agent-interlocutor was the most preferred interface, attracting the highest ratings of trust and significantly enhancing the pleasure and sense of control over the journey experience, despite the inclusion of 'trust challenges' as part of the design. The findings can help support the design and development of in-vehicle agent-based voice interfaces to enhance trust and user experience in autonomous cars.