Reliability and Validity of Low Temporal Resolution Eye Tracking Systems in Cognitive Performance Tasks
Pub Date: 2018-01-01 | DOI: 10.4018/IJMHCI.2018010103
A. Sievert, A. Witzki, Marco M. Nitzschner
Eye tracking experiments are an important contribution to human-computer interaction (HCI) research. Eye movements indicate attention, information processing, and cognitive state. Oculomotor activity is usually captured with high temporal resolution eye tracking systems, which are expensive and not affordable for everyone; moreover, these systems require specific hardware and software. However, affordable and practical systems are needed, especially for applied research concerning mobile HCI in everyday life. This study examined the reliability and validity of low temporal resolution devices by comparing data from a table-mounted system with an electrooculogram (EOG). Gaze patterns of twenty participants were recorded while they performed a visual reaction task and a surveillance task. Statistical analyses showed high consistency between the two measurement systems for the recorded gaze parameters. These results indicate that data from low temporal resolution eye trackers are sufficient to derive performance-related oculomotor parameters and that such solutions present a viable alternative for applied HCI research.
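As an illustration of the kind of between-device consistency analysis this abstract describes, the sketch below compares per-trial fixation durations from two measurement systems using a Pearson correlation and Bland-Altman-style limits of agreement. The variable names and sample values are invented for illustration and are not taken from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical per-trial mean fixation durations (ms) recorded
# simultaneously by the table-mounted tracker and the EOG.
eye_tracker = np.array([212.0, 245.0, 198.0, 230.0, 260.0, 221.0])
eog = np.array([218.0, 240.0, 205.0, 226.0, 255.0, 229.0])

# Pearson correlation: do the two systems order trials consistently?
r, p = stats.pearsonr(eye_tracker, eog)

# Bland-Altman-style agreement: systematic bias and its spread.
diff = eye_tracker - eog
bias = diff.mean()
spread = 1.96 * diff.std(ddof=1)

print(f"r = {r:.3f} (p = {p:.3f}); bias = {bias:.1f} ms; "
      f"95% limits of agreement = [{bias - spread:.1f}, {bias + spread:.1f}] ms")
```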
{"title":"Reliability and Validity of Low Temporal Resolution Eye Tracking Systems in Cognitive Performance Tasks","authors":"A. Sievert, A. Witzki, Marco M. Nitzschner","doi":"10.4018/IJMHCI.2018010103","DOIUrl":"https://doi.org/10.4018/IJMHCI.2018010103","url":null,"abstract":"Eye tracking experiments are an important contribution to human computer interaction (HCI) research. Eye movements indicate attention, information processing, and cognitive state. Oculomotor activity is usually captured with high temporal resolution eye tracking systems, which are expensive and not affordable for everyone. Moreover, these systems require specific hard- and software. However, affordable and practical systems are needed especially for applied research concerning mobile HCI in everyday life. This study examined the reliability/validity of low temporal resolution devices by comparing data of a table-mounted system with an electrooculogram. Gaze patterns of twenty participants were recorded while performing a visual reaction and a surveillance task. Statistical analyses showed high consistency between both measurement systems for recorded gaze parameters. These results indicate that data from low temporal resolution eye trackers are sufficient to derive performance related oculomotor parameters and that such solutions present a viable alternative for applied HCI research.","PeriodicalId":43100,"journal":{"name":"International Journal of Mobile Human Computer Interaction","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79960901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Audio Technology and Mobile Human Computer Interaction: From Space and Place, to Social Media, Music, Composition and Creation
Pub Date: 2017-10-01 | DOI: 10.4018/IJMHCI.2017100103
A. Chamberlain, Mads Bødker, Adrian Hazzard, D. McGookin, D. Roure, P. Willcox, Konstantinos Papangelis
Audio-based mobile technology is opening up a range of new interactive possibilities. This paper brings some of those possibilities to light by offering a range of perspectives on the area. It is not only the technical systems that are developing: novel approaches to the design and understanding of audio-based mobile systems are also evolving, offering new perspectives on interaction and design and supporting the application of such systems in areas such as the humanities.
{"title":"Audio Technology and Mobile Human Computer Interaction: From Space and Place, to Social Media, Music, Composition and Creation","authors":"A. Chamberlain, Mads Bødker, Adrian Hazzard, D. McGookin, D. Roure, P. Willcox, Konstantinos Papangelis","doi":"10.4018/IJMHCI.2017100103","DOIUrl":"https://doi.org/10.4018/IJMHCI.2017100103","url":null,"abstract":"Audio-based mobile technology is opening up a range of new interactive possibilities. This paper brings some of those possibilities to light by offering a range of perspectives based in this area. It is not only the technical systems that are developing, but novel approaches to the design and understanding of audio-based mobile systems are evolving to offer new perspectives on interaction and design and support such systems to be applied in areas, such as the humanities.","PeriodicalId":43100,"journal":{"name":"International Journal of Mobile Human Computer Interaction","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.4018/IJMHCI.2017100103","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41935634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adding Expressiveness to Smartwatch Notifications Through Ambient Illumination
Pub Date: 2017-10-01 | DOI: 10.4018/IJMHCI.2017100101
Frederic Kerber, Sven Gehring, A. Krüger, Markus Löchtefeld
The ongoing miniaturization of technology makes it possible to create ever more powerful devices in smaller form factors. One outcome of this development is smart wearable devices, such as smartwatches, which open up new possibilities for mobile human-computer interaction. While recent research has revealed that these devices are mainly used to display notifications, the very small screen size can be a hindrance: explicit user interaction is required, for example, to browse through notifications to get an overview of them. The authors present an alternative, providing an aggregation and filtering approach to better handle notifications. Furthermore, they investigated several display concepts based on a self-built smartwatch prototype equipped with twelve full-color LEDs that presents notifications through ambient illumination. Drawing on a user study with twelve participants, the work concludes with guidelines that could be employed when designing notification systems.
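The aggregation-and-filtering idea lends itself to a brief sketch. The category names, colors, and proportional LED-allocation rule below are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter

# Illustrative category -> RGB mapping for a 12-LED ambient ring.
CATEGORY_COLORS = {"message": (0, 0, 255), "email": (0, 255, 0),
                   "calendar": (255, 165, 0), "social": (255, 0, 255)}
NUM_LEDS = 12

def aggregate(notifications, min_count=1):
    """Count pending notifications per category and filter out rare ones."""
    counts = Counter(n["category"] for n in notifications)
    return {c: k for c, k in counts.items() if k >= min_count}

def led_frame(aggregated):
    """Allocate the 12 LEDs proportionally to the aggregated counts."""
    total = sum(aggregated.values())
    frame = []
    for category, count in sorted(aggregated.items()):
        share = round(NUM_LEDS * count / total)
        frame.extend([CATEGORY_COLORS[category]] * share)
    return frame[:NUM_LEDS]

pending = [{"category": "email"}, {"category": "message"},
           {"category": "message"}, {"category": "social"}]
print(led_frame(aggregate(pending)))
```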
{"title":"Adding Expressiveness to Smartwatch Notifications Through Ambient Illumination","authors":"Frederic Kerber, Sven Gehring, A. Krüger, Markus Löchtefeld","doi":"10.4018/IJMHCI.2017100101","DOIUrl":"https://doi.org/10.4018/IJMHCI.2017100101","url":null,"abstract":"The ongoing miniaturization of technology provides the possibility to create more and more powerful devices in smaller form factors. One characteristic of this development is smart wearable devices, such as smartwatches, which open up new possibilities for mobile human-computer interaction. While recent research has revealed that these devices are mainly used to display notifications, the very small screen size can be a hindrance. Consequently, explicit user interaction is, for example, required to browse through notifications to get an overview of them. The authors present an alternative by providing an aggregation and filtering approach to better handle notifications. Furthermore, they investigated several display concepts based on a self-built smartwatch prototype equipped with twelve full-color LEDs to present notifications through ambient illumination. Derived from a user study with twelve participants, the work concludes with guidelines that could be employed when designing notification systems.","PeriodicalId":43100,"journal":{"name":"International Journal of Mobile Human Computer Interaction","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85432705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Inferring Intent and Action from Gaze in Naturalistic Behavior: A Review","authors":"Kristian Lukander, M. Toivanen, K. Puolamäki","doi":"10.4018/IJMHCI.2017100104","DOIUrl":"https://doi.org/10.4018/IJMHCI.2017100104","url":null,"abstract":"Weconstantlymoveourgazetogatheracutevisualinformationfromourenvironment.Conversely,as originallyshownbyYarbusinhisseminalwork,theelicitedgazepatternsholdinformationoverour changingattentionalfocuswhileperformingatask.Recently,theproliferationofmachinelearning algorithmshasallowedtheresearchcommunitytotesttheideaofinferring,orevenpredictingaction andintentfromgazebehaviour.Theon-goingminiaturizationofgazetrackingtechnologiestoward pervasivewearablesolutionsallowsstudyinginferencealsoineverydayactivitiesoutsideresearch laboratories.Thispaperscopestheemergingfieldandreviewsstudiesfocusingontheinferenceof intentandactioninnaturalisticbehaviour.Whilethetask-specificnatureofgazebehavior,andthe variabilityinnaturalisticsetupspresentchallenges,gaze-basedinferenceholdsaclearpromisefor machine-basedunderstandingofhumanintentandfutureinteractivesolutions. KeywoRdS Eye Movements, Gaze Tracking, Inference, Intent Modeling, Scoping Study, Task Modeling","PeriodicalId":43100,"journal":{"name":"International Journal of Mobile Human Computer Interaction","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85825467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using a Vibrotactile Seat for Facilitating the Handover of Control during Automated Driving","authors":"Ariel Telpaz, Brian Rhindress, I. Zelman, Omer Tsimhoni","doi":"10.4018/ijmhci.2017070102","DOIUrl":"https://doi.org/10.4018/ijmhci.2017070102","url":null,"abstract":"Studieshavefoundthatdriverstendtoneglecttheirsurroundingtrafficduringautomateddriving. Thismayleadtoalateandinefficientresumptionofcontrolincaseofhandoverofthedrivingtaskto thedriver.Theauthorsevaluatedtheeffectivenessofavibrotactileseatdisplayingspatialinformation regardingvehiclesapproachingfrombehindtoenhancethedriverpreparednesstothehandoverof control.Asimulatorexperiment,involving26participants,showedthatwhendriverswererequired toregaincontrolofthevehicle,havingavibrotactileseatimprovedspeedandefficiencyofreactions inscenariosrequiringlanechangingimmediatelyfollowingahandover.Inaddition,eye-tracking analysisshowedthattheparticipantshadmoresystematicscanpatternsofthemirrorsinthefirsttwo secondsfollowingthetransitionofcontrolrequest.Interestingly,thiseffectexistsin-spiteofthefinding thatduringautomateddrivingmode,havingavibrotactiledisplayledtofewerglancesattheroad. KeywoRDS Automated Driving HMI, Driving Simulator, Eye Tracking, Handover of Control, Haptic Feedback, Vibrotactile Displays, Vibrotactile Seat","PeriodicalId":43100,"journal":{"name":"International Journal of Mobile Human Computer Interaction","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81728318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating the Feasibility of Vehicle Telemetry Data as a Means of Predicting Driver Workload
Pub Date: 2017-07-01 | DOI: 10.4018/ijmhci.2017070104
Phillip Taylor, N. Griffiths, A. Bhalerao, Zhou Xu, A. Gelencser, T. Popham
Driving is a safety-critical task that demands a high level of attention from the driver and imposes a substantial workload. Despite this, people often also perform secondary tasks such as eating or using a mobile phone, which increase workload levels and divert cognitive and physical attention from the primary task of driving. If a vehicle is aware that the driver is currently under high workload, vehicle functionality can be adapted to minimize any further demand. Traditionally, workload has been measured using intrusive means such as physiological sensors. Another approach is to use vehicle telemetry data as a performance measure for workload. In this paper, we present the Warwick-JLR Driver Monitoring Dataset (DMD) and analyse it to investigate the feasibility of using vehicle telemetry data to determine driver workload. We perform a statistical analysis of subjective ratings, physiological data, and vehicle telemetry data collected during a track study. A data mining methodology is then presented for building predictive models on this data for the driver workload monitoring problem.
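The modelling step described here can be sketched as a standard supervised-learning pipeline that maps windowed telemetry features to subjective workload ratings. The feature names and synthetic data below are placeholders and do not reflect the actual DMD schema.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300  # synthetic telemetry windows

# Hypothetical windowed telemetry features: steering-angle variance,
# mean speed (km/h), and brake-pressure variance.
X = np.column_stack([rng.normal(0.04, 0.02, n),
                     rng.normal(90, 10, n),
                     rng.normal(0.15, 0.07, n)])

# Hypothetical subjective workload ratings (roughly 0-10), loosely
# driven by the steering and braking features plus noise.
y = 5 + 40 * X[:, 0] + 10 * X[:, 2] + rng.normal(0, 0.5, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
print("cross-validated R^2:", cross_val_score(model, X, y, cv=5).mean())
```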
{"title":"Investigating the Feasibility of Vehicle Telemetry Data as a Means of Predicting Driver Workload","authors":"Phillip Taylor, N. Griffiths, A. Bhalerao, Zhou Xu, A. Gelencser, T. Popham","doi":"10.4018/ijmhci.2017070104","DOIUrl":"https://doi.org/10.4018/ijmhci.2017070104","url":null,"abstract":"Driving is a safety critical task that requires a high level of attention and workload from the driver. Despite this, people often also perform secondary tasks such as eating or using a mobile phone, which increase workload levels and divert cognitive and physical attention from the primary task of driving. If a vehicle is aware that the driver is currently under high workload, the vehicle functionality can be changed in order to minimize any further demand. Traditionally, workload measurements have been performed using intrusive means such as physiological sensors. Another approach may be to use vehicle telemetry data as a performance measure for workload. In this paper, we present the Warwick-JLR Driver Monitoring Dataset (DMD) and analyse it to investigate the feasibility of using vehicle telemetry data for determining the driver workload. We perform a statistical analysis of subjective ratings, physiological data, and vehicle telemetry data collected during a track study. A data mining methodology is then presented to build predictive models using this data, for the driver workload monitoring problem.","PeriodicalId":43100,"journal":{"name":"International Journal of Mobile Human Computer Interaction","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80343406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Skyline: A Platform Towards Scalable UX-Centric In-Vehicle HMI Development
Pub Date: 2017-07-01 | DOI: 10.4018/ijmhci.2017070103
Ignacio J. Alvarez, Laura Rumbel
{"title":"Skyline: A Platform Towards Scalable UX-Centric In-Vehicle HMI Development","authors":"Ignacio J. Alvarez, Laura Rumbel","doi":"10.4018/ijmhci.2017070103","DOIUrl":"https://doi.org/10.4018/ijmhci.2017070103","url":null,"abstract":"","PeriodicalId":43100,"journal":{"name":"International Journal of Mobile Human Computer Interaction","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72409002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Effects of Augmented Reality Head-Up Displays on Drivers' Eye Scan Patterns, Performance, and Perceptions
Pub Date: 2017-04-01 | DOI: 10.4018/IJMHCI.2017040101
Missie Smith, Joseph L. Gabbard, G. Burnett, Nadejda Doutcheva
This paper reports on an experiment comparing Head-Up Display (HUD) and Head-Down Display (HDD) use while driving in a simulator to explore differences in glance patterns, driving performance, and user preferences. Sixteen participants completed both structured (text) and semi-structured (grid) visual search tasks on each display while following a lead vehicle in a motorway (highway) environment. Participants experienced three levels of complexity (low, medium, high) for each visual search task, with five repetitions of each level of complexity. Results suggest that the grid task was not sensitive enough to the varying visual demands, while the text task showed significant differences between displays in user preference, perceived workload, and distraction. As complexity increased, HUD use during the text task corresponded with faster performance as compared to the HDD, indicating the potential benefits of using HUDs in the driving context. Furthermore, HUD use was associated with longer sustained glances at the respective display as compared to the HDD, with no differences in driving performance observed. This finding suggests that AR HUDs afford longer glances without negatively affecting the longitudinal and lateral control of the vehicle, a result that has implications for how future researchers should evaluate the visual demands of AR HUDs.
{"title":"The Effects of Augmented Reality Head-Up Displays on Drivers' Eye Scan Patterns, Performance, and Perceptions","authors":"Missie Smith, Joseph L. Gabbard, G. Burnett, Nadejda Doutcheva","doi":"10.4018/IJMHCI.2017040101","DOIUrl":"https://doi.org/10.4018/IJMHCI.2017040101","url":null,"abstract":"This paper reports on an experiment comparing Head-Up Display HUD and Head-Down Display HDD use while driving in a simulator to explore differences in glance patterns, driving performance, and user preferences. Sixteen participants completed both structured text and semi-structured grid visual search tasks on each display while following a lead vehicle in a motorway highway environment. Participants experienced three levels of complexity low, medium, high for each visual search task, with five repetitions of each level of complexity. Results suggest that the grid task was not sensitive enough to the varying visual demands, while the text task showed significant differences between displays in user preference, perceived workload, and distraction. As complexity increased, HUD use during the text task corresponded with faster performance as compared to the HDD, indicating the potential benefits when using HUDs in the driving context. Furthermore, HUD use was associated with longer sustained glances at the respective display as compared to the HDD, with no differences in driving performance observed. This finding suggests that AR HUDs afford longer glances without negatively affecting the longitudinal and lateral control of the vehicle-a result that has implications for how future researchers should evaluate the visual demands for AR HUDs.","PeriodicalId":43100,"journal":{"name":"International Journal of Mobile Human Computer Interaction","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86189364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autonomous Driving: Investigating the Feasibility of Bimodal Take-Over Requests
Pub Date: 2017-04-01 | DOI: 10.4018/IJMHCI.2017040104
Marcel Walch, Kristin Mühl, M. Baumann, M. Weber
Autonomous vehicles will need de-escalation strategies for when they reach system limitations. Car-driver handovers are one possible method of dealing with system boundaries. The authors suggest a bimodal (auditory and visual) handover assistant based on user preferences and design principles for automated systems. They conducted a driving simulator study with 30 participants to investigate drivers' take-over performance. In particular, the authors examined the effect of different warning conditions (a take-over request only, with a 4- or 6-second time budget, vs. an additional pre-cue stating why the take-over request will follow) in different hazardous situations. Their results indicated that all warning conditions were feasible in all situations, although the short time budget (4 seconds) was rather challenging and led to less safe performance. An alert ahead of a take-over request had the positive effect that participants took over and intervened earlier relative to the appearance of the take-over request. Overall, the authors' evaluation showed that bimodal warnings composed of textual and iconographic visual displays accompanied by alerting jingles and spoken messages are a promising approach for alerting drivers and asking them to take over.
{"title":"Autonomous Driving: Investigating the Feasibility of Bimodal Take-Over Requests","authors":"Marcel Walch, Kristin Mühl, M. Baumann, M. Weber","doi":"10.4018/IJMHCI.2017040104","DOIUrl":"https://doi.org/10.4018/IJMHCI.2017040104","url":null,"abstract":"Autonomous vehicles will need de-escalation strategies to compensate when reaching system limitations. Car-driver handovers can be considered one possible method to deal with system boundaries. The authors suggest a bimodal auditory and visual handover assistant based on user preferences and design principles for automated systems. They conducted a driving simulator study with 30 participants to investigate the take-over performance of drivers. In particular, the authors examined the effect of different warning conditions take-over request only with 4 and 6 seconds time budget vs. an additional pre-cue, which states why the take-over request will follow in different hazardous situations. Their results indicated that all warning conditions were feasible in all situations, although the short time budget 4 seconds was rather challenging and led to a less safe performance. An alert ahead of a take-over request had the positive effect that the participants took over and intervened earlier in relation to the appearance of the take-over request. Overall, the authors' evaluation showed that bimodal warnings composed of textual and iconographic visual displays accompanied by alerting jingles and spoken messages are a promising approach to alert drivers and to ask them to take over.","PeriodicalId":43100,"journal":{"name":"International Journal of Mobile Human Computer Interaction","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74115407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Smartwatches vs. Smartphones: Notification Engagement while Driving
Pub Date: 2017-04-01 | DOI: 10.4018/IJMHCI.2017040103
W. Giang, H. Chen, Birsen Donmez
This work seeks to understand whether the unique features of a smartwatch, compared to a smartphone, mitigate or exacerbate driver distraction due to notifications, and to provide insights about drivers' perceptions of the risks associated with using smartwatches while driving. As smartwatches gain popularity among consumers, there is a need to understand how smartwatch use may influence driving performance. Previous driving research has examined voice calling on smartwatches, but not interactions with notifications, a key marketed feature. Engaging with notifications (e.g., reading and texting) on a handheld device is a known distraction associated with increased crash risk. Two driving simulator studies compared smartwatch to smartphone notifications. Experiment I asked participants to read aloud brief text notifications, and Experiment II had participants manually select a response to arithmetic questions presented as notifications. Both experiments investigated the resulting glances to and physical interactions with the devices, as well as self-reported risk perception. Experiment II also investigated driving performance and self-reported knowledge/expectations about legislation surrounding the use of smart devices while driving. Experiment I found that participants were faster to visually engage with the notification on the smartwatch than on the smartphone, took longer to finish reading aloud the notifications, and exhibited more glances longer than 1.6 s. Experiment II found that participants took longer to reply to notifications and had longer overall glance durations on the smartwatch than on the smartphone, along with longer brake reaction times to lead-vehicle braking events. Compared to the no-device baseline, both devices increased lane position variability and resulted in higher self-reported perceived risk. Experiment II participants also considered that smartwatch use while driving deserves penalties equal to or lesser than those for smartphone use. The findings suggest that smartwatches may have road safety consequences. Given that participants commonly associated smartwatch use with traffic penalties equal to or lesser than those for smartphone use, there may be a disconnect between drivers' actual performance and their perceptions of smartwatch use while driving.
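Glance metrics like those reported here (e.g., counting glances longer than 1.6 s) can be derived from a frame-by-frame log of which area of interest (AOI) the eyes are on. Below is a minimal sketch assuming a fixed sampling rate and hypothetical AOI labels; it is not the authors' analysis pipeline.

```python
from itertools import groupby

SAMPLE_RATE_HZ = 60   # assumed gaze sampling rate
LONG_GLANCE_S = 1.6   # threshold used in the abstract above

def glance_durations(aoi_stream, target="device"):
    """Collapse consecutive samples on the target AOI into glances
    and return each glance's duration in seconds."""
    return [sum(1 for _ in run) / SAMPLE_RATE_HZ
            for aoi, run in groupby(aoi_stream) if aoi == target]

# Hypothetical per-frame labels: road vs. device (watch or phone).
stream = ["road"] * 120 + ["device"] * 110 + ["road"] * 60 + ["device"] * 30
glances = glance_durations(stream)
print("glance durations (s):", glances,
      "| long glances:", sum(d > LONG_GLANCE_S for d in glances))
```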
{"title":"Smartwatches vs. Smartphones: Notification Engagement while Driving","authors":"W. Giang, H. Chen, Birsen Donmez","doi":"10.4018/IJMHCI.2017040103","DOIUrl":"https://doi.org/10.4018/IJMHCI.2017040103","url":null,"abstract":"This work seeks to understand whether the unique features of a smartwatch, compared to a smartphone, mitigate or exacerbate driver distraction due to notifications, and to provide insights about drivers' perceptions of the risks associated with using smartwatches while driving. As smartwatches are gaining popularity among consumers, there is a need to understand how smartwatch use may influence driving performance. Previous driving research has examined voice calling on smartwatches, but not interactions with notifications, a key marketed feature. Engaging with notifications e.g., reading and texting on a handheld device is a known distraction associated with increased crash risks. Two driving simulator studies compared smartwatch to smartphone notifications. Experiment I asked participants to read aloud brief text notifications and Experiment II had participants manually select a response to arithmetic questions presented as notifications. Both experiments investigated the resulting glances to and physical interactions with the devices, as well as self-reported risk perception. Experiment II also investigated driving performance and self-reported knowledge/expectation about legislation surrounding the use of smart devices while driving. Experiment I found that participants were faster to visually engage with the notification on the smartwatch than the smartphone, took longer to finish reading aloud the notifications, and exhibited more glances longer than 1.6 s. Experiment II found that participants took longer to reply to notifications and had longer overall glance durations on the smartwatch than the smartphone, along with longer brake reaction times to lead vehicle braking events. Compared to the no device baseline, both devices increased lane position variability and resulted in higher self-reported perceived risk. Experiment II participants also considered that smartwatch use while driving deserves penalties equal to or less than smartphone use. The findings suggest that smartwatches may have road safety consequences. Given the common view among participants to associate smartwatch use with equal or less traffic penalties than smartphone use, there may be a disconnect between drivers' actual performance and their perceptions about smartwatch use while driving.","PeriodicalId":43100,"journal":{"name":"International Journal of Mobile Human Computer Interaction","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76629827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}