Personal well-being is influenced by small daily decisions such as what or when to eat or whether to go jogging. The health consequences of these decisions accumulate over time. Mobile applications can be designed to support people in these everyday decisions and thus help to improve well-being in real time. In this paper we propose a framework for studying the factors that influence users' decisions to download and use applications from the wide selection currently available in app stores. The framework includes attractiveness, value, ease-of-use, trust, social support, diffusiveness, as well as fun and excitement. We illustrate how the framework works in practice by applying it in an online survey assessing 12 mobile apps for well-being. The results show that these influential factors matched users' attitudes toward taking the apps into use. User feedback explained how people assessed the influential factors before actually using the applications.
{"title":"What influences users' decisions to take apps into use?: a framework for evaluating persuasive and engaging design in mobile Apps for well-being","authors":"Ting-Ray Chang, E. Kaasinen, K. Kaipainen","doi":"10.1145/2406367.2406370","DOIUrl":"https://doi.org/10.1145/2406367.2406370","url":null,"abstract":"Personal well-being is influenced by small daily decisions such as what or when to eat or whether to go jogging. The health consequences of these decisions accumulate over time. Mobile applications can be designed to support people in everyday decisions and thus help to improve well-being real-time. In this paper we propose a framework to study the influential factors for users to download and use applications from the wide selection currently available in App stores. The framework includes attractiveness, value, ease-of-use, trust, social support, diffusiveness, as well as fun and excitement. We illustrated how the framework works in practise by applying it to an online survey to assess 12 mobile Apps for well-being. The results showed that these influential factors did match the decisions on users' attitudes toward taking Apps into use. User feedback explained how people assessed the influential factors before actually using the applications.","PeriodicalId":181563,"journal":{"name":"Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127220586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
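The seven factors named in the abstract lend themselves to a simple survey-analysis sketch: average per-factor ratings across respondents to build a profile for each app. The rating scale, data layout, and function names below are assumptions for illustration, not the authors' instrument.

```python
# Hypothetical sketch: aggregating survey ratings for the seven influence
# factors the framework proposes. Factor names come from the abstract;
# the 1-5 Likert scale and dict layout are assumptions.

FACTORS = ["attractiveness", "value", "ease-of-use", "trust",
           "social support", "diffusiveness", "fun and excitement"]

def factor_profile(ratings):
    """Average each factor's rating across all respondents for one app."""
    return {f: sum(r[f] for r in ratings) / len(ratings) for f in FACTORS}

# Two illustrative respondents rating the same app.
responses = [
    {f: 4 for f in FACTORS},
    {f: 2 for f in FACTORS},
]
profile = factor_profile(responses)
```

A profile like this could then be compared against users' stated intention to install the app, which is the kind of match the abstract reports.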
Support for ecological sustainability is of growing interest, and the over-consumption, production and disposal of food are a major concern for sustainability, ethics and the economy. However, there is a deficit in current understanding of how technologies could be used in this area. In this paper we focus on food waste and report on a qualitative study of daily food practices around shopping planning, gardening, storing, cooking and throwing away food, and their relation to waste. The findings point to design-relevant factors such as losing sight and reordering; spatial, temporal and social constraints; trust in and valuing of the food source; and busyness, unpredictability and effort. The main contribution of this paper is to understand food practices and, in turn, to present seven dimensions of visibility that draw out implications for designing mobile and ubiquitous technologies for this new arena for design. We also present a prototype evolving from our qualitative results: the mobile food waste diary.
{"title":"Creating visibility: understanding the design space for food waste","authors":"Eva Ganglbauer, G. Fitzpatrick, Georg Molzer","doi":"10.1145/2406367.2406369","DOIUrl":"https://doi.org/10.1145/2406367.2406369","url":null,"abstract":"Support for ecological sustainability is of growing interest and the over-consumption, production and disposal of foods are a major concern for sustainability, ethics and the economy. However, there is a deficit in current understandings of how technologies could be used within this area. In this paper we focus on food waste and report on a qualitative study to understand daily food practices around shopping planning, gardening, storing, cooking and throwing away food, and their relations to waste. The findings point to design-relevant factors such as losing sight and reordering; spatial, temporal and social constraints; trust and valuing food source; and busyness, unpredictability and effort. The main contribution of this paper is to understand food practices and in turn to present seven dimensions of visibility to draw out implications for designing mobile and ubiquitous technologies for this new arena for design. We also present a prototype evolving from our qualitative results, the mobile food waste diary.","PeriodicalId":181563,"journal":{"name":"Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126517926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Studies show that every fourth smartphone user watches videos on their device. Moreover, thanks to improving camera and encoding quality, smartphones are also becoming an attractive tool for creating and editing videos. The demand for smooth video browsing interfaces is challenged by the limited input and output capabilities such mobile devices offer. This paper discusses a novel interface for fast and precise video browsing, suitable for both watching and editing videos. The browsing mechanism offers a simple but powerful interface for browsing videos at different levels of granularity, and all interactions can be carried out without any modal changes. The interface is easy to understand and efficient to use. A first evaluation indicates that the presented design suits casual users as well as creative professionals such as video editors.
{"title":"ProPane: fast and precise video browsing on mobile phones","authors":"Roman Ganhör","doi":"10.1145/2406367.2406392","DOIUrl":"https://doi.org/10.1145/2406367.2406392","url":null,"abstract":"Studies show that every fourth smartphone user watches videos on their device. However, because of increasing camera and encoding quality more and more smartphones are providing an attractive tool for creating and editing videos. The demand for smooth video browsing interfaces is challenged by the limited input and output capabilities that such mobile devices offer. This paper discusses a novel interface for fast and precise video browsing suitable for watching and editing videos. The browsing mechanism offers a simple but powerful interface for browsing videos at different levels of granularity. All interactions can be carried out with no modal changes at all. The interface is easy to understand and efficient to use. A first evaluation proves the suitability of the presented design for casual users as well as for creative professionals such as video editors.","PeriodicalId":181563,"journal":{"name":"Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130909397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
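Browsing "at different levels of granularity" on a touch screen is often implemented as variable-rate scrubbing: the further the finger moves away from the timeline, the finer each pixel of horizontal motion seeks. The sketch below illustrates that general idea only; the thresholds, frame counts, and function names are invented and are not the published ProPane design.

```python
# Illustrative variable-granularity scrubbing (NOT the actual ProPane
# mechanism): vertical distance from the timeline selects how many
# frames one pixel of horizontal drag skips. Values are invented.

GRANULARITY = [(0, 30), (100, 5), (200, 1)]  # (min vertical offset px, frames per px)

def frames_per_pixel(vertical_offset):
    """Coarse seeking near the timeline, finer as the finger moves away."""
    fpp = GRANULARITY[0][1]
    for threshold, frames in GRANULARITY:
        if vertical_offset >= threshold:
            fpp = frames
    return fpp

def seek(current_frame, dx, vertical_offset, total_frames):
    """Move the playhead by dx pixels of drag, clamped to the video."""
    target = current_frame + dx * frames_per_pixel(vertical_offset)
    return max(0, min(total_frames - 1, target))
```

Because the granularity level is encoded in finger position rather than a mode switch, this style of control needs no modal changes, matching the property the abstract emphasizes.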
This paper presents an interactive thumbnail-based video browser for mobile devices such as touch-screen smartphones. Developed as part of ongoing research and supported by user studies, HiStory is a hierarchical storyboard design whose interface metaphor is familiar and intuitive yet supports fast and effective completion of known-item search tasks by rapidly providing an overview of a video's content at varying degrees of granularity.
{"title":"HiStory: a hierarchical storyboard interface design for video browsing on mobile devices","authors":"Wolfgang Hürst, Dimitrios Paris Darzentas","doi":"10.1145/2406367.2406389","DOIUrl":"https://doi.org/10.1145/2406367.2406389","url":null,"abstract":"This paper presents an interactive thumbnail-based video browser for mobile devices such as smartphones featuring a touch screen. Developed as part of on-going research and supported by user studies, it introduces HiStory, a hierarchical storyboard design offering an interface metaphor that is familiar and intuitive yet supports fast and effective completion of Known-Item-Search tasks by rapidly providing an overview of a video's content with varying degrees of granularity.","PeriodicalId":181563,"journal":{"name":"Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133930028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mobile phones and video game controllers using gesture recognition technologies enable easy and intuitive operations, such as scrolling a browser and drawing objects. However, usually only one of each kind of sensor is installed in a device, and the effect of multiple homogeneous sensors on recognition accuracy has not been investigated. Moreover, the effect of differences in the motion of a gesture has not been examined. We investigated a test mobile device with nine accelerometers and nine gyroscopes, and captured data for 27 kinds of gestures on a mobile tablet. We experimentally investigated how recognition accuracy is affected by the number and positions of the sensors and by the number and kinds of gestures. The results showed that using multiple homogeneous sensors has zero or negligible effect on recognition accuracy, but that using an accelerometer together with a gyroscope improves it. They also showed that some gestures were inconsistent across test subjects and correlated with one another, so selecting a suitable subset of gestures can improve recognition accuracy.
{"title":"Evaluation study on sensor placement and gesture selection for mobile devices","authors":"Kazuya Murao, A. Yano, T. Terada, R. Matsukura","doi":"10.1145/2406367.2406376","DOIUrl":"https://doi.org/10.1145/2406367.2406376","url":null,"abstract":"Mobile phones and video game controllers using gesture recognition technologies enable easy and intuitive operations, such as scrolling a browser and drawing objects. However, usually only one of each kind of sensor is installed in a device, and the effect of multiple homogeneous sensors on recognition accuracy has not been investigated. Moreover, the effect of the differences in the motion of a gesture has not been examined. We have investigated the use of a test mobile device with nine accelerometers and nine gyroscopes. We have captured the data for 27 kinds of gestures for a mobile tablet. We experimentally investigated the effects on recognition accuracy of changing the number and positions of the sensors and of the number and kinds of gestures. The results showed that the use of multiple homogeneous sensors has zero or negligible effect on recognition accuracy, but that using an accelerometer along with a gyroscope improves recognition accuracy. They also showed that some gestures were not consistent among test subjects and interdependent, so selecting specific gestures to use can improve recognition accuracy.","PeriodicalId":181563,"journal":{"name":"Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129868682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
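The paper's finding that an accelerometer plus a gyroscope beats either sensor alone comes down to sensor fusion at the feature level: concatenate the two feature vectors before classification. The sketch below shows only that fusion step; the features, gesture templates, and nearest-centroid classifier are illustrative assumptions, not the authors' recognition pipeline.

```python
import math

# Hedged sketch of accelerometer + gyroscope fusion: concatenate the
# per-sensor feature vectors, then classify. Templates and the
# nearest-centroid classifier are invented for illustration.

def fuse(accel_features, gyro_features):
    """Feature-level fusion: one longer vector per gesture sample."""
    return accel_features + gyro_features

def classify(sample, templates):
    """Return the gesture label whose template is nearest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda label: dist(sample, templates[label]))

# Two hypothetical gesture templates (2 accel features + 2 gyro features).
templates = {
    "shake": fuse([1.0, 0.1], [0.0, 0.0]),
    "twist": fuse([0.1, 0.1], [1.0, 0.2]),
}
label = classify(fuse([0.9, 0.2], [0.1, 0.0]), templates)
```

With only accelerometer features, "shake" and "twist" samples can look similar; the extra gyroscope dimensions separate them, which is the intuition behind the reported accuracy gain.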
Interaction is repeatedly pointed out as a key enabler of more engaging and valuable public displays. Still, most digital public displays today do not support any interactive features. We believe this is mainly due to the lack of efficient and clear abstractions that developers can use to incorporate interactivity into their applications. In this demo we present PuReWidgets, a toolkit that developers can use in their public display applications to support the interaction process across multiple display systems, without having to consider which interaction modality each particular display will use. PuReWidgets provides high-level widgets to application programmers and allows users to interact through various mechanisms, such as graphical user interfaces on mobile devices, QR codes, and SMS.
{"title":"Creating web-based interactive public display applications with the PuReWidgets toolkit","authors":"Jorge C. S. Cardoso, R. Jose","doi":"10.1145/2406367.2406434","DOIUrl":"https://doi.org/10.1145/2406367.2406434","url":null,"abstract":"Interaction is repeatedly pointed out as a key enabling element towards more engaging and valuable public displays. Still, most digital public displays today do not support any interactive features. We believe that this is mainly due to the lack of efficient and clear abstractions that developers can use to incorporate interactivity into their applications. In this demo we present PuReWidgets, a toolkit that developers can use in their public display applications to support the interaction process across multiple display systems, without considering the specifics of what interaction modality will be used on each particular display. PuReWidgets provides high-level widgets to application programmers, and allows users to interact via various interaction mechanisms, such as graphical user interfaces for mobile devices, QR codes, SMS, etc.","PeriodicalId":181563,"journal":{"name":"Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128410745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
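The abstraction the toolkit is described as providing — one high-level widget, many input modalities — can be sketched as a widget that receives events from interchangeable input adapters. The class, method, and adapter names below are hypothetical and are not the actual PuReWidgets API.

```python
# Sketch of modality-independent widgets (hypothetical names, not the
# real PuReWidgets API): the application sees one Button; touch, QR and
# SMS adapters all deliver events to it the same way.

class Button:
    def __init__(self, label):
        self.label = label
        self.clicks = []

    def deliver(self, event):
        """Called by any input adapter; the app never sees the modality split."""
        self.clicks.append(event)

def sms_adapter(widgets, message):
    # Route an incoming SMS keyword to the widget with a matching label.
    for w in widgets:
        if message.strip().lower() == w.label.lower():
            w.deliver({"modality": "sms", "text": message})

def qr_adapter(widgets, scanned_label):
    # Route a decoded QR payload to the matching widget.
    for w in widgets:
        if scanned_label == w.label:
            w.deliver({"modality": "qr"})

vote = Button("Vote")
sms_adapter([vote], " vote ")
qr_adapter([vote], "Vote")
```

The design point is that adding a new modality means adding an adapter, not touching application code — which is the decoupling the abstract argues displays currently lack.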
Recent mobile phones allow users to perform a multitude of different tasks. This task complexity challenges the design of mobile applications in many ways. For instance, the limited screen space of mobile devices allows only a small number of items to be displayed, so users often have to change the view or resize the displayed content (e.g., images or maps) to see the required information. We present MobIES, a system that allows users to extend the interfaces of their mobile phones using external screens. Users connect their mobile phone to an external display by holding it against the border of that display. When the connection is established, the user interface of the currently active mobile application is distributed across the phone and the external screen. This lets users take advantage of existing screens in their environment and temporarily benefit from an extended screen space. In this paper, we discuss the concept of MobIES and present a prototype implementation.
{"title":"MobIES: extending mobile interfaces using external screens","authors":"Dennis Schneider, Julian Seifert, E. Rukzio","doi":"10.1145/2406367.2406438","DOIUrl":"https://doi.org/10.1145/2406367.2406438","url":null,"abstract":"Recent mobile phones allow users to perform a multitude of different tasks. Complexity of tasks challenges the design of mobile applications in many ways. For instance, the limited screen space of mobile devices allows only a small number of items to be displayed. Therefore, users often have to change the view or have to resize the displayed content (e.g., images or maps) to view the required information. We present the system MobIES, which allows users to extend the mobile interfaces of their mobiles phones using external screens. Users connect their mobile phone with an external display by holding it on the border of the external display. When the connection is established, the user interface of the currently active mobile application is distributed on the phone and the external screen. This enables users to take advantage of using existing screens in their environments and temporarily benefit from an extended screen space. In this paper, we discuss the concept of MobIES and present a prototype implementation.","PeriodicalId":181563,"journal":{"name":"Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123593868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper provides a description of the technical implementation of a complete software and hardware solution supporting guided factory tours in an industrial environment. It starts with an overview of the strategies applied to come up with a working solution satisfying given user requirements. The overall architecture of the system is shown based on design decisions elaborated during this process. Then, specific topics posing challenges during the implementation are described from the assessment of the current state of the art and possible implementation options to the final working solution. These topics include the organization of configurable content delivery, the selection and application of an augmented reality framework, and the creation of an audio broadcast solution for the tour guide. We conclude with a review of the practical application of the system and possible further development and future improvements.
{"title":"evoGuide: implementation of a tour guide support solution with multimedia and augmented-reality content","authors":"R. Hable, T. Rössler, Christina Schuller","doi":"10.1145/2406367.2406403","DOIUrl":"https://doi.org/10.1145/2406367.2406403","url":null,"abstract":"This paper provides a description of the technical implementation of a complete software and hardware solution supporting guided factory tours in an industrial environment. It starts with an overview of the strategies applied to come up with a working solution satisfying given user requirements. The overall architecture of the system is shown based on design decisions elaborated during this process. Then, specific topics posing challenges during the implementation are described from the assessment of the current state of the art and possible implementation options to the final working solution. These topics include the organization of configurable content delivery, the selection and application of an augmented reality framework, and the creation of an audio broadcast solution for the tour guide. We conclude with a review of the practical application of the system and possible further development and future improvements.","PeriodicalId":181563,"journal":{"name":"Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120962068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper describes an interactive 3D user interface that allows different real-time services to cooperate in order to complete a user task on a tablet-sized mobile device. The UI is based on objectifying data, describing it in an ontological language, and combining this with behavioral scripts. The 3D UI visualizes the different data and services while providing a means for object selection and manipulation. Two use cases, a cinema service and a music service, have been implemented in a prototype, which demonstrates how a general 3D user interface can provide added value to users. The prototype is also meant to advance thinking about what 3D UIs can accomplish, for virtual environments and mobile augmented reality, by showing practical use cases in action.
{"title":"Service fusion: an interactive 3D user interface","authors":"Seamus Hickey, Minna Pakanen, Leena Arhippainen, Erno Kuusela, Antti Karhu","doi":"10.1145/2406367.2406432","DOIUrl":"https://doi.org/10.1145/2406367.2406432","url":null,"abstract":"This paper describes an interactive 3D user interface which allows different real-time services to cooperate in order to complete a user task on a mobile tablet sized device. The UI is based on objectifying data and describing it in an ontological language and combing this with behavioral scripts. The 3D UI is used to visualize the different data and services, while providing a means for object selection and manipulation. Two use cases, a cinema and music service, have been implemented in a prototype, which demonstrates how a general 3D user interface can be used to provide added value to users. The prototype is also meant to advance the thinking of what 3D UIs can accomplish, for virtual environments and mobile augmented reality, by showing practical use cases in action.","PeriodicalId":181563,"journal":{"name":"Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115559792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce a model for landmark highlighting in pedestrian route guidance services on mobile devices. The model determines which landmarks are the most attractive, based on their properties in the current context of the user's orientation and location on the route, and highlights these landmarks on the mobile map. The attractiveness of a landmark is based on its visual, structural and semantic properties, which are used to calculate the total attractiveness of a single landmark. The model was evaluated in a laboratory study with volunteer users. Test subjects were shown images of street intersections, from which they selected the most attractive and prominent landmarks in the route's context; we then compared these selections with the landmarks chosen by the model. The results show that the landmarks highlighted by the model were the same ones the participants selected as most salient.
{"title":"Model for landmark highlighting in mobile web services","authors":"Pekka Kallioniemi, M. Turunen","doi":"10.1145/2406367.2406398","DOIUrl":"https://doi.org/10.1145/2406367.2406398","url":null,"abstract":"We introduce a model for landmark highlighting for pedestrian route guidance services for mobile devices. The model determines which landmarks are the most attractive based on their properties in the current context of user's orientation and the location on the route and highlights these landmarks on the mobile map. The attractiveness of a landmark is based on its visual, structural and semantic properties which are used for calculating the total attractiveness of a single landmark. This model was evaluated with voluntary users conducted in laboratory environment. Test subjects were shown images of street intersections from where they selected the most attractive and prominent landmarks in the route's context. We then compared these results with the landmarks selected by the model. The results show that landmarks highlighted by the model were the same ones that were selected by the participants as most salient landmarks.","PeriodicalId":181563,"journal":{"name":"Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126741585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
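Combining visual, structural and semantic properties into a total attractiveness score is, in its simplest form, a weighted sum followed by an argmax over the candidate landmarks at a decision point. The weights and property scores below are invented for illustration; the abstract does not state the model's actual weighting.

```python
# Minimal sketch of the kind of scoring the model describes: total
# attractiveness as a weighted combination of visual, structural and
# semantic saliency. Weights and scores are invented assumptions.

WEIGHTS = {"visual": 0.5, "structural": 0.3, "semantic": 0.2}

def attractiveness(landmark):
    """Total attractiveness of one landmark as a weighted sum of its properties."""
    return sum(WEIGHTS[k] * landmark[k] for k in WEIGHTS)

def highlight(landmarks):
    """Pick the most attractive landmark at the current decision point."""
    return max(landmarks, key=attractiveness)

# Two hypothetical candidates at one street intersection.
candidates = [
    {"name": "church", "visual": 0.9, "structural": 0.8, "semantic": 0.7},
    {"name": "kiosk", "visual": 0.4, "structural": 0.2, "semantic": 0.3},
]
best = highlight(candidates)
```

In the paper's evaluation setup, the output of `highlight` would be compared against the landmark that test subjects picked from the same intersection image.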