Traditional cognitive testing for older adults can be inaccessible, expensive, and time-consuming. The development of computerized cognitive tests (CCTs) has made strides toward alleviating these issues. Self-administered CCTs allow individuals to test quickly and conveniently on a variety of devices. However, such tests may not account for contextual information pertinent to the testing situation (e.g., is the user in a suitable environment or context to test?). This dissertation aims to develop a mobile, context-aware cognitive testing system (CACTS) capable of tracking and analyzing contextual information during CCTs. Using mobile device sensors and user input, the proposed context-aware system will capture ambient and behavioral data during testing to complement user performance results. This research will help provide insight into the contextual factors relevant to users' testing efficacy and performance in CCTs.
{"title":"Mobile context-aware cognitive testing system","authors":"Sean-Ryan Smith","doi":"10.1145/3098279.3119926","DOIUrl":"https://doi.org/10.1145/3098279.3119926","url":null,"abstract":"Traditional cognitive testing for older adults can be inaccessible, expensive, and time consuming. The development of computerized cognitive tests (CCTs) has made strides to alleviate such issues with traditional cognitive testing. Self-administered CCTs allow for individuals to test rapidly and conveniently on various devices. However, such tests may not factor in relevant contextual information pertinent to the testing situation (e.g., is the user in a proper environment or context to test?). This dissertation aims to develop a mobile, context-aware cognitive testing system (CACTS) capable of tracking and analyzing contextual information during CCTs. By utilizing mobile device sensors and user input, the proposed context-aware system will capture ambient and behavioral data during testing to compliment user performance results. This research will help provide insight into the contextual factors that are relevant to the user's testing efficacy and performance in CCTs.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127513520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Michael Braun, N. Broy, Bastian Pfleging, Florian Alt
In this paper we chart a design space for conversational in-vehicle information systems (IVIS). Our work is motivated by the proliferation of speech interfaces in everyday life, which have already found their way into consumer electronics and will most likely become pervasive in future cars. Our design space is based on expert interviews as well as a comprehensive literature review. We present five core dimensions - assistant, position, dialog design, system capabilities, and driver state - and show in an initial study how these dimensions affect the design of a prototypical IVIS. Design spaces have paved the way for much of the work done in HCI, in areas such as input and pointing devices, smartphones, displays, and automotive UIs. In a similar way, we expect our design space to aid practitioners in designing future IVIS, as well as researchers exploring this young area of research.
{"title":"A design space for conversational in-vehicle information systems","authors":"Michael Braun, N. Broy, Bastian Pfleging, Florian Alt","doi":"10.1145/3098279.3122122","DOIUrl":"https://doi.org/10.1145/3098279.3122122","url":null,"abstract":"In this paper we chart a design space for conversational in-vehicle information systems (IVIS). Our work is motivated by the proliferation of speech interfaces in our everyday life, which have already found their way into consumer electronics and will most likely become pervasive in future cars. Our design space is based on expert interviews as well as a comprehensive literature review. We present five core dimensions - assistant, position, dialog design, system capabilities, and driver state - and show in an initial study how these dimensions affect the design of a prototypical IVIS. Design spaces have paved the way for much of the work done in HCI including but not limited to areas such as input and pointing devices, smart phones, displays, and automotive UIs. In a similar way, we expect our design space to aid practitioners in designing future IVIS but also researchers as they explore this young area of research.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128544861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design of text entry on small-screen devices such as smartwatches faces two related challenges: trading off a reasonably sized keyboard area against space to display the entered text, and the concern over "fat fingers". This paper investigates tap accuracy and revisits layered interfaces to explore a novel layered text entry method. A two-part user study identifies preferred typing and reading tilt angles and then investigates variants of a tilting layered keyboard against a standard layout. We show good typing speed (29 wpm) and very high accuracy on the standard layout, contradicting fears that fat fingers limit watch text entry. User feedback is positive towards tilting interaction, and we identify ∼14° tilt as a comfortable typing angle. However, layering resulted in slightly slower and more erroneous entry. The paper contributes new data on tilt angles and key offsets for smartwatch text entry, along with supporting evidence for the suitability of QWERTY on smartwatches.
{"title":"Text entry tap accuracy and exploration of tilt controlled layered interaction on Smartwatches","authors":"Mark D. Dunlop, M. Roper, G. Imperatore","doi":"10.1145/3098279.3098560","DOIUrl":"https://doi.org/10.1145/3098279.3098560","url":null,"abstract":"Design of text entry on small screen devices, e.g. smartwatches, faces two related challenges: trading off a reasonably sized keyboard area against space to display the entered text and the concern over \"fat fingers\". This paper investigates tap accuracy and revisits layered interfaces to explore a novel layered text entry method. A two part user study identifies preferred typing and reading tilt angles and then investigates variants of a tilting layered keyboard against a standard layout. We show good typing speed (29 wpm) and very high accuracy on the standard layout - contradicting fears of fat-fingers limiting watch text-entry. User feedback is positive towards tilting interaction and we identify ∼14° tilt as a comfortable typing angle. However, layering resulted in slightly slower and more erroneous entry. The paper contributes new data on tilt angles and key offsets for smartwatch text entry and supporting evidence for the suitability of QWERTY on smartwatches.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130361676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Remote usability evaluation makes it possible to analyse users' behaviour in their daily settings. We present a method and an associated tool that identify potential usability issues through the analysis of client-side logs of mobile Web interactions. The log analysis is based on the identification of specific usability smells. We describe an example set of bad usability smells and how they are detected. The tool also allows evaluators to add new usability smells not included in the original set. We also report on using the tool to analyse the usability of a real, widely used application accessed by forty people through their smartphones whenever and wherever they wanted.
{"title":"Customizable automatic detection of bad usability smells in mobile accessed web applications","authors":"F. Paternò, Antonio Giovanni Schiavone, A. Conte","doi":"10.1145/3098279.3098558","DOIUrl":"https://doi.org/10.1145/3098279.3098558","url":null,"abstract":"Remote usability evaluation enables the possibility of analysing users' behaviour in their daily settings. We present a method and an associated tool able to identify potential usability issues through the analysis of client-side logs of mobile Web interactions. Such log analysis is based on the identification of specific usability smells. We describe an example set of bad usability smells, and how they are detected. The tool also allows evaluators to add new usability smells not included in the original set. We also report on the tool use in analysing the usability of a real, widely used application accessed by forty people through their smartphones whenever and wherever they wanted.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116288329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Miriam Greis, Tilman Dingler, A. Schmidt, C. Schmandt
People use more and more applications and devices that quantify daily behavior, such as step count or phone usage. Merely presenting the collected data does not necessarily help users understand their behavior. Recent research has proposed concepts such as learning by reflection to foster behavior change based on personal data. In this paper, we introduce user-made predictions to help users understand personal behavior patterns. To this end, we developed an Android application that tracks users' screen-on and unlock patterns on their phone. The application asks users to predict their daily behavior based on their past usage data. In a user study with 12 participants, we showed the feasibility of leveraging user-made predictions in a quantified-self approach. By trying to improve their predictions over the course of the study, participants discovered new insights into their personal behavior patterns along the way.
{"title":"Leveraging user-made predictions to help understand personal behavior patterns","authors":"Miriam Greis, Tilman Dingler, A. Schmidt, C. Schmandt","doi":"10.1145/3098279.3122147","DOIUrl":"https://doi.org/10.1145/3098279.3122147","url":null,"abstract":"People use more and more applications and devices that quantify daily behavior such as the step count or phone usage. Purely presenting the collected data does not necessarily support users in understanding their behavior. In recent research, concepts such as learning by reflection are proposed to foster behavior change based on personal data. In this paper, we introduce user-made predictions to help users understand personal behavior patterns. Therefore, we developed an Android application that tracks users' screen-on and unlock patterns on their phone. The application asks users to predict their daily behavior based on their former usage data. In a user study with 12 participants, we showed the feasibility of leveraging user-made predictions in a quantified self approach. By trying to improve their predictions over the course of the study, participants automatically discovered new insights into personal behavior patterns.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121562804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
C. Pinder, Jo Vermeulen, Benjamin R. Cowan, R. Beale, R. Hendley
Subliminal priming has the potential to influence people's attitudes and behaviour, making them prefer certain choices over others. Yet little research has explored its feasibility on smartphones, even though the global popularity and increasing use of smartphones has spurred interest in mobile behaviour change interventions. This paper addresses technical, ethical, and design issues in delivering mobile subliminal priming. We present three explorations of the technique: a technical feasibility study and two participant studies. A pilot study (n=34) explored subliminal goal priming in the wild over one week, while a semi-controlled study (n=101) explored the immediate effect of subliminal priming on three different types of stimuli. We found that although subliminal priming is technically possible on smartphones, there is limited evidence that it changes how much users prefer the primed stimuli, and effects were inconsistent across stimulus types. We discuss the implications of our results and directions for future research.
{"title":"Exploring the feasibility of subliminal priming on smartphones","authors":"C. Pinder, Jo Vermeulen, Benjamin R. Cowan, R. Beale, R. Hendley","doi":"10.1145/3098279.3098531","DOIUrl":"https://doi.org/10.1145/3098279.3098531","url":null,"abstract":"Subliminal priming has the potential to influence people's attitudes and behaviour, making them prefer certain choices over others. Yet little research has explored its feasibility on smartphones, even though the global popularity and increasing use of smartphones has spurred interest in mobile behaviour change interventions. This paper addresses technical, ethical and design issues in delivering mobile subliminal priming. We present three explorations of the technique: a technical feasibility study, and two participant studies. A pilot study (n=34) explored subliminal goal priming in-the-wild over 1 week, while a semi-controlled study (n=101) explored the immediate effect of subliminal priming on 3 different types of stimuli. We found that although subliminal priming is technically possible on smartphones, there is limited evidence of impact on changes in how much stimuli are preferred by users, with inconsistent effects across stimuli types. We discuss the implications of our results and directions for future research.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114872713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Over the last decade, a body of research has investigated enriching touch actions by using finger orientation as an additional input. Beyond new interaction techniques, we envision new user interface elements that make use of this additional input information. We define the finger's orientation by its pitch, roll, and yaw on the touch surface. Determining the finger orientation is not possible with current state-of-the-art devices. As a first step, we built a system that can determine the finger orientation: a working prototype with a depth camera mounted on a tablet. We conducted a study with 12 participants to record ground-truth data for the index, middle, ring, and little finger and to evaluate the accuracy of our prototype using the PointPose [3] algorithm to estimate the pitch and yaw of the finger. By applying 2D linear correction models, we further show a reduction of RMSE by 45.4% for pitch and 21.83% for yaw.
{"title":"Feasibility analysis of detecting the finger orientation with depth cameras","authors":"Sven Mayer, Michael Mayer, N. Henze","doi":"10.1145/3098279.3122125","DOIUrl":"https://doi.org/10.1145/3098279.3122125","url":null,"abstract":"Over the last decade, a body of research investigated enriching touch actions by using finger orientation as an additional input. Beyond new interaction techniques, we envision new user interface elements to make use of the additional input information. We define the fingers orientation by the pitch, roll, and yaw on the touch surface. Determining the finger orientation is not possible using current state-of-the-art devices. As a first step, we built a system that can determine the finger orientation. We developed a working prototype with a depth camera mounted on a tablet. We conducted a study with 12 participants to record ground truth data for the index, middle, ring and little finger to evaluate the accuracy of our prototype using the PointPose [3] algorithm to estimate the pitch and yaw of the finger. By applying 2D linear correction models, we further show a reduction of RMSE by 45.4% for pitch and 21.83% for yaw.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127923342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose the concept of a guiding system specifically designed for semaphoric gaze gestures, i.e. gestures defining a vocabulary to trigger commands via the gaze modality. Our design exploration considers the fundamental gaze gesture phases: Exploration, Guidance, and Return. A first experiment reveals that Guidance with dynamic elements moving along 2D paths is efficient and resistant to visual complexity. A second experiment reveals that a Rapid Serial Visual Presentation of command names during Exploration allows for more than 30% faster command retrieval than a standard visual search. To resume the task where the guide was triggered, labels moving from the outward extremity of the 2D paths toward the guide center lead to efficient and accurate origin retrieval during the Return phase. We evaluate our resulting Gaze Gesture Guiding system, G3, for interacting with distant objects in an office environment using a head-mounted display. Users report positively on their experience with both semaphoric gaze gestures and G3.
{"title":"Designing a gaze gesture guiding system","authors":"W. Delamare, Teng Han, Pourang Irani","doi":"10.1145/3098279.3098561","DOIUrl":"https://doi.org/10.1145/3098279.3098561","url":null,"abstract":"We propose the concept of a guiding system specifically designed for semaphoric gaze gestures, i.e. gestures defining a vocabulary to trigger commands via the gaze modality. Our design exploration considers fundamental gaze gesture phases: Exploration, Guidance, and Return. A first experiment reveals that Guidance with dynamic elements moving along 2D paths is efficient and resistant to visual complexity. A second experiment reveals that a Rapid Serial Visual Presentation of command names during Exploration allows for more than 30% faster command retrievals than a standard visual search. To resume the task where the guide was triggered, labels moving from the outward extremity of 2D paths toward the guide center leads to efficient and accurate origin retrieval during the Return phase. We evaluate our resulting Gaze Gesture Guiding system, G3, for interacting with distant objects in an office environment using a head-mounted display. Users report positively on their experience with both semaphoric gaze gestures and G3.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115679765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A lack of intrinsic motivation to practice speech is often attributed to tedious and repetitive speech curricula, but mobile games have been widely recognized as a valid motivator for jaded individuals. SpokeIt is an interactive, storybook-style speech therapy game that aims to turn practicing speech into a motivating and productive experience for individuals with speech impairments, as well as to provide speech therapists with an important diagnostic tool. In this paper, I discuss the novel intellectual contributions SpokeIt can provide, such as an offline critical conversational speech recognition system and the application of therapy curricula to mobile platforms; I also present the research conducted so far and consider future work and research directions.
{"title":"A mobile game system for improving the speech therapy experience","authors":"Jared Duval","doi":"10.1145/3098279.3119925","DOIUrl":"https://doi.org/10.1145/3098279.3119925","url":null,"abstract":"A lack of intrinsic motivation to practice speech is attributed to tedious and repetitive speech curriculums, but mobile games have been widely recognized as a valid motivator for jaded individuals. SpokeIt is an interactive storybook style speech therapy game that intends to turn practicing speech into a motivating and productive experience for individuals with speech impairments as well as provide speech therapists an important diagnostic tool. In this paper, I discuss the novel intellectual contributions SpokeIt can provide such as an offline critical conversational speech recognition system, and the application of therapy curriculums to mobile platforms, I present conducted research, and consider exciting future work and research directions.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115714121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present EXHI-bit, a mechanical structure for prototyping unique shape-changing interfaces that can be easily built in a fabrication laboratory. EXHI-bit surfaces consist of interweaving units that slide in two dimensions. This assembly enables the creation of unique expandable handheld surfaces with continuous transitions while keeping the surface flat, rigid, and non-porous. EXHI-bit surfaces can be combined to create 2D and 3D multi-surface objects. In this paper, we demonstrate the versatility and generality of EXHI-bit with user-deformed and self-actuated 1D, 2D, and 3D prototypes employed in an architectural urban planning scenario. We also present a vision of the use of expandable tablets in everyday life, gathered from 10 users after they interacted with an EXHI-bit tablet.
{"title":"EXHI-bit: a mechanical structure for prototyping EXpandable handheld interfaces","authors":"Michaël Ortega, Jérôme Maisonnasse, L. Nigay","doi":"10.1145/3098279.3098533","DOIUrl":"https://doi.org/10.1145/3098279.3098533","url":null,"abstract":"We present EXHI-bit, a mechanical structure for prototyping unique shape-changing interfaces that can be easily built in a fabrication laboratory. EXHI-bit surfaces consist of inter-weaving units that slide in two dimensions. This assembly enables the creation of unique expandable handheld surfaces with continuous transitions while maintaining the surface flat, rigid, and non-porous. EXHI-bit surfaces can be combined to create 2D and 3D multi-surface objects. In this paper, we demonstrate the versatility and generality of EXHI-bit with user-deformed and self-actuated 1D, 2D, and 3D prototypes employed in an architectural urban planning scenario. We also present vision on the use of expandable tablets in our everyday life from 10 users after having interacted with an EXHI-bit tablet.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123705192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}