As everyday companions, smartphones are well-suited tools for controlling interactive applications on large public displays. To allow concurrent interaction by multiple users beyond traditional collaborative scenarios, we introduce the idea of virtually augmented public screens that create personalized views and thus literally enable "private public screens". We present a fully functional research prototype in the form of a Video Wall application and report on initial findings from a comparative user study. The results show that the proposed personalized Augmented Reality approach, which gives each user a private view of the public display, is preferred over a purely competitive mode in which the public display is shared between the users. Furthermore, our study shows that social activity indicators, which inform users about the activities of others, are well appreciated.
Matthias Baldauf, Katrin Lasinger, Peter Fröhlich. Private public screens: detached multi-user interaction with large displays through mobile augmented reality. In Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia (MUM '12), December 2012. DOI: 10.1145/2406367.2406401
Social Devices are smartphones that interact with each other in order to proactively trigger interaction between co-located users. Social Devices promote proxemic interactions between people and can act as enablers of users' social contacts. Various modalities are used to support natural interactions between users and Social Devices. We present an exploratory study of spatial gestures that could support interactions with Social Devices. Our aim was to find out what kinds of gestures users would like to use for different types of actions. Ten pairs of participants took part in a laboratory study in which they went through three Social Devices scenarios. The participants were asked to generate gestures that they would find suitable for user actions within the scenarios. In this paper, we present the identified gesture types and how well they fit various user actions. Our results show that Scan, Swing, Nod, and Turn the screen down are potential spatial gestures for intuitive use of Social Devices. User feedback on their preferences for using spatial gestures indicates issues of social acceptance.
Kaisa Väänänen, Thomas Olsson, Jari Laaksonen. An exploratory study of user-generated spatial gestures with social mobile devices. In Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia (MUM '12), December 2012. DOI: 10.1145/2406367.2406381
In the age of online social networks, local communities still play an essential role in supporting social cohesion. In this paper we present a study that explores the design of "interacting places" -- networked public multimedia services that foster community awareness between local members -- in the context of a student community. In order to have interacting places "fit in" with the existing communication practices of the students, we performed and analyzed a set of semi-structured interviews with n=17 students regarding their use of email, social networking services, and instant messaging to stay in touch with others. A follow-up online survey (n=76) then explored how networked public multimedia services could complement these practices. Following a "communicative ecology" approach -- a conceptual model that represents the technical, social, and discursive contexts of communication -- we draw up guidelines to support the design of both content and channels (applications) for interacting places in student communities.
Nemanja Memarovic, Marc Langheinrich, Elisa Rubegni, Andreia David, Ivan Elhart. Designing "interacting places" for a student community using a communicative ecology approach. In Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia (MUM '12), December 2012. DOI: 10.1145/2406367.2406420
In various environments, such as mobile and wearable computing, compact I/O devices are desirable for portability. Many users are accustomed to keyboard input, but there is a limit to how far a keyboard can be miniaturized before key-touch performance degrades. In this paper, we therefore propose a method to miniaturize a keyboard by removing half of it. With the proposed method, one hand hits keys as usual, while the other hand taps the area outside the keyboard as if the user were typing with both hands. The user can enter words with only one hand because the system estimates the intended word from the keying intervals, which occur just as they do when typing with both hands. The results of a user study confirmed that users can type with only one hand and that input speed does not decrease drastically.
Takuya Katayama, T. Terada, Kazuya Murao, M. Tsukamoto. A text input method for half-sized keyboard using keying interval. In Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia (MUM '12), December 2012. DOI: 10.1145/2406367.2406375
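The word-estimation idea above can be sketched as follows. This is a hedged illustration, not the authors' implementation: the right-half letter set and the tiny dictionary are our assumptions for the example, and the paper's interval-based model is replaced here by simple dictionary filtering of the ambiguous taps.

```python
# Illustrative sketch of half-keyboard word estimation. The left hand
# presses real keys; taps by the other hand land outside the keyboard,
# so only their timing is observed and the letter is ambiguous.

RIGHT_HALF = "yuiophjklnm"  # letters the removed half would have typed (assumed set)

def candidate_words(events, dictionary):
    """events: ('L', char) for a real left-hand keypress, or ('R', None)
    for a timed tap outside the keyboard. Each right-hand tap is
    expanded over all right-half letters; the dictionary then filters
    the combinations down to plausible words."""
    prefixes = ['']
    for side, ch in events:
        letters = [ch] if side == 'L' else list(RIGHT_HALF)
        prefixes = [p + letter for p in prefixes for letter in letters]
    return [w for w in prefixes if w in dictionary]

# Typing "fish": 'f' and 's' are pressed on the remaining half,
# while 'i' and 'h' are only taps whose timing is detected.
events = [('L', 'f'), ('R', None), ('L', 's'), ('R', None)]
print(candidate_words(events, {"fish", "fast", "tree"}))  # ['fish']
```

A real system would additionally weight the candidates by how well the measured keying intervals match two-handed typing rhythms, rather than enumerating all combinations.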
Most mobile games are designed so that users focus only on their own screens, and thus they lack face-to-face interaction even when the players are sitting together. Prior work shows that a shared information space created by multiple mobile devices can encourage users to communicate with each other naturally. The aim of this work is to provide a fluent view-stitching technique that lets mobile phone users establish a shared information view. We present MagMobile, a new spatial interaction technique that allows users to stitch views by simply placing multiple mobile devices close to each other. We describe the design of a spatial-aware sensor module that is low cost and easy to integrate into phones. We also propose two collaborative games that foster social interaction in co-located settings.
Da-Yuan Huang, Chien-Pang Lin, Y. Hung, Tzu-Wen Chang, Neng-Hao Yu, Min-Lun Tsai, Mike Y. Chen. MagMobile: enhancing social interactions with rapid view-stitching games of mobile devices. In Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia (MUM '12), December 2012. DOI: 10.1145/2406367.2406440
Table display surfaces, like Microsoft PixelSense, can display multimedia content to a group of users simultaneously, but they are expensive and lack mobility. Mobile devices, by contrast, are readily available, but their limited screen size and resolution make them unsuitable for sharing multimedia data interactively. In this paper we present the "Dynamic Tiling Display", an interactive display surface built from mobile devices. Our framework uses the integrated front-facing camera of each device to estimate the relative pose of multiple mobile screens arbitrarily placed on a table. Using this framework, users can create a large virtual display on which multiple users explore multimedia data interactively through separate windows (the mobile screens). The major technical challenge is the calibration of the individual displays, which is solved by visual object recognition on the front-facing camera input.
Ming Li, L. Kobbelt. Dynamic tiling display: building an interactive display surface using multiple mobile devices. In Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia (MUM '12), December 2012. DOI: 10.1145/2406367.2406397
Vision-based approaches for mobile indoor localization do not rely on dedicated infrastructure and are therefore scalable and cheap. The particular requirements for a navigation user interface backed by a vision-based system, however, have not been investigated so far. Such mobile interfaces should adapt to localization accuracy, which depends strongly on distinctive reference images, as well as to other factors such as the phone's pose. If necessary, the system should motivate the user to point the smartphone at distinctive regions to improve localization quality. We present a combined interface of Virtual Reality (VR) and Augmented Reality (AR) elements with indicators that help communicate and ensure localization accuracy. In an evaluation with 81 participants, we found that AR was preferred when localization was reliable, but with VR, navigation instructions were perceived as more accurate in the presence of localization and orientation errors. The additional indicators showed potential for making users choose distinctive reference images for reliable localization.
Andreas Möller, M. Kranz, Robert Huitl, Stefan Diewald, L. Roalter. A mobile indoor navigation system interface adapted to vision-based localization. In Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia (MUM '12), December 2012. DOI: 10.1145/2406367.2406372
We report on two one-month-long field trials in which Bluetooth access points deployed around Oulu, Finland, attempted to push unsolicited multimedia marketing messages to passing mobile devices whose Bluetooth was switched on and visible. The logs, covering ~65,000 unique discovered devices of real users, show that only 0.12% of the ~650,000 transmission attempts were successful. On average, 1.1% of the devices received the message, and 3.3% of the owners of these devices signed up for the marketing campaign. These statistics characterize the efficiency of 'carpet bombing' proximity marketing realized with current Bluetooth technology without any supporting mechanisms.
T. Ojala, Fabio Kruger, V. Kostakos, Ville Valkama. Two field trials on the efficiency of unsolicited Bluetooth proximity marketing. In Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia (MUM '12), December 2012. DOI: 10.1145/2406367.2406414
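The percentages above imply rough absolute counts, which a quick back-of-the-envelope check makes concrete. Note the base figures are approximations (~65,000 and ~650,000), so the derived counts are order-of-magnitude estimates, not exact log totals:

```python
# Rough counts implied by the abstract's percentages (the base figures
# are approximate, so these are order-of-magnitude estimates only).
attempts = 650_000            # ~ transmission attempts
devices = 65_000              # ~ unique discovered devices

successful = attempts * 0.0012    # 0.12% of attempts succeeded
received = devices * 0.011        # 1.1% of devices got the message
signed_up = received * 0.033      # 3.3% of those owners signed up

print(round(successful), round(received), round(signed_up))
```

So roughly 780 transmissions succeeded, about 715 devices received the message, and on the order of two dozen owners signed up for the campaign, which underlines how inefficient the untargeted approach was.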
In this paper we present our work exploring the possibilities of using 3D projection, instead of 2D displays, for a media installation set-up intended for exhibition and teaching purposes. We compare two visual media installations presenting a rotating Earth, one shown on a 2D display and the other projected onto a physical 3D object, and present feedback collected using the product evaluation cards method.
Minna Karukka, Pekka Nisula, Jonna Häkkilä, Jussi Kangasoja. Charting the audience perceptions of projected 3D media installations. In Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia (MUM '12), December 2012. DOI: 10.1145/2406367.2406426
When using a tablet computer, sketching is a natural approach for users to annotate video scenes. However, when these annotations are made in real-time and overlaid on the video, their context can be lost as the scene being annotated changes. We propose an approach to maintaining the annotations' context by using object tracking to create anchors onto which further annotations can be attached. To this end, the annotator can use different tracking methods, including a Kinect sensor and/or the TLD object-tracking algorithm. The challenges involved in designing an interface to support the association of video annotations with tracked objects in real-time are also discussed. In particular, we discuss our alternative approaches to handling the selection of moving objects on live video, which we call "Hold and Overlay" and "Hold and Speed Up". In addition, the results of a set of preliminary tests are reported.
João M. F. Silva, Diogo Cabral, Carla Fernandes, N. Correia. Real-time annotation of video objects on tablet computers. In Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia (MUM '12), December 2012. DOI: 10.1145/2406367.2406391
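The anchoring idea can be captured in a minimal data-model sketch. This is our illustration only, not the paper's implementation: the actual system obtains object positions from the Kinect sensor or the TLD tracker, which are stubbed out here as plain coordinate pairs.

```python
# Sketch of a tracker-anchored annotation: the sketch stores its offset
# from the tracked object at creation time and is re-placed wherever
# the tracker later reports the object (the tracker itself is stubbed out).

class AnchoredAnnotation:
    def __init__(self, sketch, object_pos, annotation_pos):
        self.sketch = sketch
        # Offset of the annotation from the object's anchor point,
        # captured at creation time.
        self.offset = (annotation_pos[0] - object_pos[0],
                       annotation_pos[1] - object_pos[1])

    def position(self, object_pos):
        # The annotation follows the tracked object across frames,
        # preserving its original spatial relation to it.
        return (object_pos[0] + self.offset[0],
                object_pos[1] + self.offset[1])

note = AnchoredAnnotation("circle", object_pos=(100, 50), annotation_pos=(120, 40))
print(note.position((300, 200)))  # (320, 190): the note follows the object
```

Keeping only the offset means the annotation's context survives camera motion and object motion alike, which is the failure mode the paper sets out to address.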