There are many benefits to mediating intimate relationships through technology, and an increasing number of ways of doing so. Among these, there is growing interest in social wearables. But most of these devices are either bespoke one-off items or overly generalized, lacking consideration for cultural context and the needs of varied user groups. Overall, our understanding of the design criteria for these artifacts, and of the potential implications of their newly afforded multifaceted interactions, lags far behind. My research aims to extend this knowledge by adopting a multidisciplinary perspective and developing design guidelines focused on meaningful use of social wearables over time.
{"title":"Designing social wearables for mediation of intimate relationships","authors":"Yulia Silina","doi":"10.1145/2957265.2963111","DOIUrl":"https://doi.org/10.1145/2957265.2963111","url":null,"abstract":"There are many benefits to mediating intimate relationships through technology, and an increasing number of ways of doing so. Among these, there is a growing interest in social wearables. But most of these devices are either bespoke one-off items or generalized and lack consideration for cultural context and needs of varied user groups. Overall, our understanding of the design criteria for these artifacts and potential implications of their newly-afforded multifaceted interactions is lagging far behind. My research aims to extend this knowledge by adopting multidisciplinary perspective and developing design guidelines with a focus on meaningful use of social wearables over time.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122291804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Webtoons are a popular form of content in South Korea that combine IT and cartoon elements to create engaging storytelling techniques. However, current webtoon rating systems remain unsatisfactory, as they cannot capture readers' unconscious behavior. In this paper, we explore the value of using readers' laughter reactions as data for humor webtoons. Laughter reaction data and rating scores were collected simultaneously in a user observation study. The laughter reactions correlated significantly with the manual rating scores. We also elicited each participant's flow of laughter, which enabled us to understand their laughter behavior and to identify the scenes they found attractive. Based on these data, an ideation session was conducted to generate new ways of using laughter reaction data for humor webtoons. We thus propose the potential value of, and viable approaches to, capturing laughter reactions for humor webtoons.
{"title":"What makes readers laugh?: value of sensing laughter for humor webtoon","authors":"Soyoung Kwon, Kun-Pyo Lee","doi":"10.1145/2957265.2961850","DOIUrl":"https://doi.org/10.1145/2957265.2961850","url":null,"abstract":"Webtoon is a popular content in South Korea that has more fun techniques by using both IT and cartoon elements. However, the rating system for webtoon is still unsatisfying which have limitations on comprehending users' unconscious behavior. In this paper, we explore the value of using users' laughter reaction data for humor webtoons. Users' laughter reaction data and the rating scores were extracted simultaneously in user observation. As a result, the laughter reaction significantly correlates with the manual rating score. Also, we elicited each participants' flow of laughter which enabled to understand their laughter behavior and scenes that were attractive. With those data, ideation was conducted to generate ideas on how laughter reaction data can be used in new ways for humor webtoons. Thus, we proposed the potential values that suggest viable solutions of capturing laughter reactions for humor webtoons.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124120258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yunlong Wang, Le Duan, Simon Butscher, Jens Müller, Harald Reiterer
Personalized and contextual interventions are promising techniques for mobile persuasive technologies in mobile health. In this paper, we propose the "fingerprints" technique, which analyzes users' daily behavior patterns to find meaningful moments for delivering mobile persuasive technologies, especially mobile health interventions. We assume that many people's behaviors follow patterns that can be detected from smartphone sensor data. We develop a three-step interactive machine learning workflow to describe the concept and approach of the "fingerprints" technique. In this way, we aim to implement a practical and lightweight mobile intervention system that does not burden users with manual logging. Our feasibility study provides first insights into the design of the "fingerprints" technique.
{"title":"Fingerprints: detecting meaningful moments for mobile health intervention","authors":"Yunlong Wang, Le Duan, Simon Butscher, Jens Müller, Harald Reiterer","doi":"10.1145/2957265.2965006","DOIUrl":"https://doi.org/10.1145/2957265.2965006","url":null,"abstract":"Personalized and contextual interventions are promising techniques for mobile persuasive technologies in mobile health. In this paper, we propose the \"fingerprints\" technique to analyze the users' daily behavior patterns to find the meaningful moments to better support mobile persuasive technologies, especially mobile health interventions. We assume that for many persons, their behaviors have patterns and can be detected through the sensor data from smartphones. We develop a three-step interactive machine learning workflow to describe the concept and approach of the \"fingerprints\" technique. By this we aim to implement a practical and light-weight mobile intervention system without burdening the users with manual logging. In our feasibility study, we show results that provide first insights into the design of the \"fingerprints\" technique.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124133610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human beings sense and perceive most of the world through their eyes. The point of gaze clearly reflects our visual attention and thereby indicates our interests. Gaze can therefore be used as a powerful tool in different research areas (e.g., marketing, psychology). Progress in eye tracking over the years has enabled the creation of gaze-based interactive interfaces. However, these interfaces lack generic usability outside controlled environments, in spontaneous, pervasive settings. The main objective of this research is to investigate eye-tracking technologies with respect to calibration. Because calibration depends on the user, location, orientation and target, it hinders multi-user interaction and gaze estimation across multiple objects (e.g., multiple screens of different sizes). Tackling these issues, we explore new mobile as well as remote interfaces and open up new design spaces.
{"title":"Methods for calibration free and multi-user eye tracking","authors":"Christian Lander","doi":"10.1145/2957265.2963116","DOIUrl":"https://doi.org/10.1145/2957265.2963116","url":null,"abstract":"Human beings sense and perceive most of the world through their eyes. The point of gaze clearly reflects our visual attention indicating our interests. Hence gaze can be used as a powerful tool in different research areas (e.g., marketing, psychology). The progress made over the years in eye tracking enables the creation of gaze-based interactive interfaces. However, these interfaces lack of generic usability outside a controlled environment in a spontaneous pervasive way. The main objective of this research is to investigate eye-tracking technologies by means of calibration. Since calibration is user, location, orientation and target dependent, it prevents from Multi-User interaction and gaze estimation on multiple various objects (e.g., multiple screens of different sizes). Tackling these issues, new mobile as well as remote interfaces are explored and new design spaces are opened.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122289141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
D. Baretta, F. Sartori, A. Greco, R. Melen, Fabio Stella, L. Bollini, M. D'addario, P. Steca
Physical activity (PA) is considered one of the most important factors for the prevention and management of non-communicable diseases (NCDs). Mobile technologies offer several opportunities for supporting PA, especially when combined with psychological insights, model-based reasoning systems and personalized human-computer interaction. This ongoing research aims to develop a scalable framework for promoting PA among both clinical and non-clinical populations, exploiting Bayesian networks and expert systems to characterize and predict qualitative variables such as self-efficacy. The expected outcomes are the collection and management of real-time behavioral and psychological data to define a personalized strategy for increasing PA.
{"title":"Wearable devices and AI techniques integration to promote physical activity","authors":"D. Baretta, F. Sartori, A. Greco, R. Melen, Fabio Stella, L. Bollini, M. D'addario, P. Steca","doi":"10.1145/2957265.2965011","DOIUrl":"https://doi.org/10.1145/2957265.2965011","url":null,"abstract":"Physical activity (PA) is considered one of the most important factors for the prevention and management of non-communicable diseases (NCDs). Mobile technologies offer several opportunities for supporting PA, especially if combined with psychological aspects, model-based reasoning systems and personalized human computer interaction. This still on-going research aims at developing a scalable framework that targets PA promotion among both clinical and non-clinical population, exploiting Bayesian Networks and Expert Systems to characterize and predict qualitative variables like self-efficacy. The expected outcomes are the collection and management of real-time behavioral and psychological data to define a personalized strategy for increasing PA.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124093289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hsin-Ruey Tsai, Min-Chieh Hsiu, Jui-Chun Hsiao, Lee-Ting Huang, Mike Y. Chen, Y. Hung
We propose TouchRing, a finger-worn touch device that provides subtle, always-available multi-touch input. TouchRing leverages printed electrodes and capacitive sensing to detect touch input, allowing users to perform multi-touch gestures with one hand and thereby increasing input modality. Worn on the index finger, TouchRing supports multi-touch using the thumb and middle finger. We design ten multi-touch gestures and propose touch detection and gesture recognition approaches for TouchRing. Gesture recognition accuracy is evaluated in a user study. We also propose applications that make controlling smart glasses more convenient.
{"title":"TouchRing: subtle and always-available input using a multi-touch ring","authors":"Hsin-Ruey Tsai, Min-Chieh Hsiu, Jui-Chun Hsiao, Lee-Ting Huang, Mike Y. Chen, Y. Hung","doi":"10.1145/2957265.2961860","DOIUrl":"https://doi.org/10.1145/2957265.2961860","url":null,"abstract":"We propose a finger-worn touch device TouchRing to provide subtle and multi-touch input. TouchRing leverages printed electrodes and the capacitive sensing technique to detect touch input. It allows users to perform multi-touch gestures in one hand to increase input modality. TouchRing worn on the index finger allows multi-touch using the thumb and middle finger. Ten multi-touch gestures are designed in this paper. We also propose touch detection and gesture recognition approaches in TouchRing. Gesture Recognition accuracy is evaluated in the user study. Applications for TouchRing are also proposed to make controlling smart glasses more convenient.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"282 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131617333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Semantic 3D models of indoor scenes enable compelling interior design applications such as remodeling, refurnishing and rearranging furniture. However, creating these models is still a challenging task. Most existing approaches are designed to work ex-situ, out of context, and rely on the modeler's memory, photographs or measurements of the scene. We propose a novel in-situ mobile capture system that leverages quick and easy semantic input from the user and offloads tedious reconstruction and modeling tasks to the computer. In this way, our system combines the advantages of automatic and manual CAD-based methods to significantly reduce modeling time and effort. Our approach runs on commodity mobile devices and can potentially scale to a much larger audience of casual mobile phone users.
{"title":"In-situ semantic 3D modeling","authors":"Aditya Sankar","doi":"10.1145/2957265.2963109","DOIUrl":"https://doi.org/10.1145/2957265.2963109","url":null,"abstract":"Semantic 3D models of indoor scenes enable compelling interior design applications such as remodeling, refurnishing and rearrangement of furniture. However, creating these models is still a challenging task. Most existing approaches are designed to work ex-situ or out of context, and rely on the modeler's memory, photographs or measurements from the scene. We propose a novel in-situ, mobile capture system that leverages quick and easy semantic input from the user and offloads tedious reconstruction and modeling tasks to the computer. In this way, our system combines the advantages of automatic and manual CAD based methods to significantly reduce modeling time and effort. Our approach runs on commodity mobile devices and can potentially scale to a much larger audience of casual mobile phone users.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132214462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Kultsova, R. Romanenko, I. Zhukova, A. Usov, Nikita Penskoy, Tatiana Potapova
This paper describes the mobile application 'Travel and Communication Assistant', which supports the mobility and communication of people with Intellectual and Developmental Disabilities (IDD). The application enables people with IDD to independently travel a known route (for example, from home to the day care center or from home to the baker's) under the remote supervision of their caregivers, and to communicate with them using text, voice and pictogram messages.
{"title":"Assistive mobile application for support of mobility and communication of people with IDD","authors":"M. Kultsova, R. Romanenko, I. Zhukova, A. Usov, Nikita Penskoy, Tatiana Potapova","doi":"10.1145/2957265.2965003","DOIUrl":"https://doi.org/10.1145/2957265.2965003","url":null,"abstract":"This paper describes the mobile application 'Travel and Communication Assistant' which supports the mobility and communication of people with Intellectual and Development Disabilities (IDD). This application provides the possibility to people with IDD to independently perform a known route (for example a route from home to the day care center, from home to the baker's, etc.) under the remote supervision of their caregivers and to communicate with them using text, voice and pictogram messages.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123843694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We demonstrate a crowd-powered model for the early diagnosis of stroke using a mobile device. The approach monitors the subject's health through three simple tests: a smile test for facial weakness, a raised-arms test for arm weakness and a speech test for slurred speech. Our demonstrated system achieves an accuracy of 87.5% over 40 test cases.
{"title":"mSTROKE: a crowd-powered mobility towards stroke recognition","authors":"Richa Tibrewal, Ankita Singh, M. Bhattacharyya","doi":"10.1145/2957265.2961831","DOIUrl":"https://doi.org/10.1145/2957265.2961831","url":null,"abstract":"We demonstrate a crowd-powered model for the early diagnosis of stroke using a mobile device. The simple approach consists of monitoring the subject's health in three simple steps including the smile test for facial weakness, raising hands test for arm weakness and speech test for slurring of speech. Our demonstrated system shows a performance accuracy of 87.5% over a total number of 40 test cases.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116543982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Giuseppe Ghiani, Marco Manca, F. Paternò, C. Santoro
The design and development of flexible applications that match the many possible user needs and provide a high-quality user experience is still a major issue. In ambient-assisted living scenarios, elderly people need adequate support so that they can live independently at home. Providing personalized assistance is particularly critical because ageing people often have widely varying individual needs, requirements and disabilities. In this position paper, we introduce a solution based on an End-User Development environment that allows patients and caregivers to tailor the context-dependent behaviour of their Web applications in order to ease patients' lives. This is done through the specification of trigger-action rules that support application customization.
{"title":"End-user personalization of context-dependent applications in AAL scenarios","authors":"Giuseppe Ghiani, Marco Manca, F. Paternò, C. Santoro","doi":"10.1145/2957265.2965005","DOIUrl":"https://doi.org/10.1145/2957265.2965005","url":null,"abstract":"The design and development of flexible applications able to match the many possible user needs and provide high quality user experience is still a major issue. In ambient-assisted living scenarios there is the need of giving adequate support to elderly so that they can independently live at home. Thus, providing personalized assistance is particularly critical because ageing people often have different ranges of individual needs, requirements and disabilities. In this position paper we introduce a solution based on an End-User Development environment that allows patients and caregivers to tailor the context-dependent behaviour of their Web applications in order to facilitate patients' life. This is done through the specification of trigger-action rules to support application customization.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125218965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}