EatAR tango: portion estimation on mobile devices with a depth sensor
R. Dinic, Michael Domhardt, Simon W. Ginzinger, Thomas Stütz
DOI: 10.1145/3098279.3125434

The accurate assessment of nutrition information is a challenging task, but crucial for people with certain diseases, such as diabetes. An important part of this assessment is portion estimation, i.e. volume estimation. Given the volume and the food type, the nutrition information can be computed on the basis of the food-type-specific nutrient density. Mobile devices with depth sensors have recently become available to the public (Google's Project Tango platform). In this work, we present an app for mobile devices with a depth sensor that assists users in portion estimation. Furthermore, we present the design of a user study for the app and preliminary results.
Visualizing spatial and time-oriented data in a second screen application
Kerstin Blumenstein, C. Niederer, Markus Wagner, Wilhelm Pfersmann, Markus Seidl, W. Aigner
DOI: 10.1145/3098279.3122127

Mobile devices are increasingly used in parallel with other screens, especially as second screen devices while watching TV. Such scenarios aim to enhance the viewer's experience of the TV content. We designed and implemented a second screen prototype intended to be used alongside a TV documentary. It allows viewers to interactively explore a combination of spatial and time-oriented data that extends and enriches the TV content. We evaluated our prototype in a twofold approach consisting of expert reviews and a user evaluation. We identified different interaction habits in a second screen scenario and present its benefits in relation to documentaries.
Exploiting mid-air gestures to share data among devices
L. Geronimo, M. Bertarini, Julia Badertscher, Maria Husmann, M. Norrie
DOI: 10.1145/3098279.3098530

The number of smart devices that people own or share with family and friends has increased dramatically. As a result, users often want to copy data among devices such as smartphones, tablets and desktop computers. While various chat and cloud services support the sharing of data, they require users to interrupt their workflow to copy resources. We present MyoShare, a system that allows content to be shared among devices using mid-air gestures that can be performed at any time, independent of the current task and the location of the devices. We report on an elicitation study in which participants designed a set of gestures for sharing content. In a second user study, we compared mid-air gestures with alternative interaction modes: keyboard or touch shortcuts, speech, and menu selection. We discuss the results of the study in terms of both the strengths and weaknesses of mid-air gestures, along with suggestions for future work.
Enabling remote deictic communication with mobile devices: an elicitation study
Samuel Navas Medrano, Max Pfeiffer, C. Kray
DOI: 10.1145/3098279.3098544

Mobile systems provide many means to relay information to a distant partner, but remote communication is still limited compared to face-to-face interaction. Deictic communication, and pointing in particular, is challenging when two parties communicate across a distance. In this paper, we investigate how people envision remote pointing working when using mobile devices. We report on an elicitation study in which we asked participants to perform a series of remote pointing tasks. Our results provide initial insights into user behaviors and specific issues in this context. We discovered that most people follow one of two basic patterns, that their individual pointing behavior is very consistent, and that the shape and location of the target object have little influence on the pointing gesture used. From our results, we derived a set of design guidelines for future user interfaces for remote pointing. Our contributions can benefit designers and researchers of such interfaces.
Spatial knowledge acquired from pedestrian urban navigation systems
Sven Bertel, T. Dressel, Tom Kohlberg, Vanessa von Jan
DOI: 10.1145/3098279.3098543

We investigated the spatial knowledge that users of pedestrian navigation support acquire about the navigated area. In particular, we compared two conditions: a spatially richer condition, which provides continual access to information about route directions and surroundings via a local map at the closest zoom level, and a spatially sparser condition, in which route directions are given via a tactile display and only as decision points come up. In a field study, 28 participants navigated on foot through a previously unfamiliar urban area. Data on resulting spatial knowledge, gaze distribution on environmental features, performance, individual spatial abilities, and user experience were collected and analysed. We were specifically interested in the route and survey knowledge that participants had acquired. The results point to advantages for acquiring route knowledge with the sparser, tactile display condition and for acquiring survey knowledge with the richer map condition. We conclude by discussing ramifications for the design and use of different types of pedestrian navigation support systems for different task scenarios.
Designing mobile deformable controls for creation of digital art
Cameron Steer
DOI: 10.1145/3098279.3119923

My research explores the design and development of deformable controls and shape displays for mobile devices. My most significant work so far has been set in the context of tools for creating digital art. I have been studying how we might design and develop deformable interfaces that support interaction with digital art applications, and how we might bring elements of realistic painting to the digital experience through physical mobile controls. Part of this work involves designing and prototyping physical hardware interfaces and then evaluating them using HCI research methods in user studies and interviews. This extended abstract outlines my research aims, my progress so far, and my future work and direction.
Investigating current techniques for opposite-hand smartwatch interaction
Frederic Kerber, T. Kiefer, Markus Löchtefeld, A. Krüger
DOI: 10.1145/3098279.3098542

The small display size of smartwatches creates a challenge for touch input, which is still the interaction technique of choice. Researchers and manufacturers have started to investigate alternative interaction techniques. Apple and Samsung, for example, have introduced digital versions of classic watch components, such as the digital crown and the rotatable bezel. However, it remains an open question how well these components perform in terms of user interaction. Based on a self-built smartwatch prototype, we compare current interaction paradigms (touch input, rotatable bezel and digital crown) for a one-dimensional task (scrolling in a list), a two-dimensional task (navigating a digital map), and a complex combined navigation/zoom task. To check the ecological validity of our results, we conducted an additional study focusing on interaction with currently available off-the-shelf devices using the considered interaction paradigms. Based on our results, we present guidelines on which interaction techniques to use for the respective tasks.
Visualizing out-of-view objects in head-mounted augmented reality
Uwe Gruenefeld, Abdallah El Ali, Wilko Heuten, Susanne CJ Boll
DOI: 10.1145/3098279.3122124

Various off-screen visualization techniques that point to off-screen objects have been developed for small-screen devices. A similar problem arises in head-mounted Augmented Reality (AR) with respect to the human field of view, where objects may be out of view. Being able to locate such out-of-view objects is useful in certain scenarios (e.g., situation monitoring during ship docking). To augment existing AR with this capability, we adapted and tested well-known 2D off-screen object visualization techniques (Arrow, Halo, Wedge) for head-mounted AR. We found that Halo resulted in the lowest error for direction estimation, while Wedge was subjectively perceived as best. We discuss future directions for how best to visualize out-of-view objects in head-mounted AR.
ShakeCast: using handshake detection for automated, setup-free exchange of contact data
Tim Weissker, E. Genc, Andreas Berst, F. Schreiber, Florian Echtler
DOI: 10.1145/3098279.3122131

We present ShakeCast, a system for automatic peer-to-peer exchange of contact information between two persons who have just shaken hands. The accelerometer in a smartwatch is used to detect the physical handshake, which implicitly triggers a setup-free information transfer between the users' personal smartphones via Bluetooth LE broadcasts. An abstract representation of the handshake motion data is used to disambiguate between multiple simultaneous transmissions and to prevent accidental data leakage. To evaluate our system, we collected individual wrist acceleration data from 130 handshakes performed by varying combinations of 20 volunteers. We present a systematic analysis of possible data features that can be used for disambiguation, and we validate our approach using the most salient features. Our analysis shows an expected match rate of 92.3% between corresponding handshakes.
MyoShare: sharing data among devices via mid-air gestures
L. Geronimo, M. Bertarini, Julia Badertscher, Maria Husmann, M. Norrie
DOI: 10.1145/3098279.3125436

As the number of mobile devices grows, so does the need for more ubiquitous networking among users' smartphones, tablets and computers. Sending and receiving content among personal devices should be an intuitive task rather than a distracting one. With MyoShare, we propose a system that uses mid-air gestures, captured via the Myo wearable armband, to share web data among devices without requiring the user to open additional windows or copy and paste content into emails or chat applications. Users can select content on any web page and send it to another device by simply waving their hands. During the demo session, conference participants will be able to try out our system and see it in action.