Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers: Latest Publications
The Sussex-Huawei Locomotion-Transportation (SHL) Challenge 2020 was an open competition on recognizing eight modes of locomotion and transportation performed by three individual users. This year's data was recorded with a smartphone placed at four different body positions. The primary challenge was to build a classification model that is both user-invariant and position-invariant. The training set consisted of data from user-1 only, covering all positions, whereas the test set consisted of data from users 2 and 3 with an unspecified sensor position. In addition, a small validation set with the same characteristics as the test set was provided to validate the classifier. In this paper, we describe our (Team Red Circle) approach, in which we combined the previous year's challenge data with this year's data to build customized training and validation sets that helped our model generalize. We extracted various types of features to make the model user-independent and position-invariant, applied a Random Forest classifier, a classical machine learning algorithm, and achieved 92.69% accuracy on our customized training set and 77.04% accuracy on our customized validation set.
{"title":"UPIC","authors":"Md. Sadman Siraj, Md. Ahasan Atick Faisal, Omar Shahid, Farhan Fuad Abir, Tahera Hossain, Sozo Inoue, Md Atiqur Rahman Ahad","doi":"10.1145/3410530.3414343","DOIUrl":"https://doi.org/10.1145/3410530.3414343","url":null,"abstract":"The Sussex-Huawei Locomotion-Transportation (SHL) Challenge 2020 was an open competition of recognizing eight different activities that had been performed by three individual users and participants of this competition were tasked to classify these eight different activities with modes of locomotion and transportation. This year's data was recorded with a smartphone which was located in four different body positions. The primary challenge was to make a user-invariant as well as position-invariant classification model. The train set consisted of data from only user-1 with all positions whereas the test set consisted of data from user 2 and 3 with unspeicified sensor position. Moreover, a small validation with the same charecteristics of the test set was given to validate the classifier. In this paper, we have described our (Team Red Circle) approach in which we have used previous year's challenge data as well as this year's provided data to make our training dataset and validation set that have helped us to make our model generative. 
In our approach, we have extracted various types of features to make our model user independent and position invariant, we have applied Random Forest classifier which is a classical machine learning algorithm and achieved 92.69% accuracy on our customized train set and 77.04% accuracy on our customized validation set.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"33 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81289810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Grace Chee, Trevor Cobb, Katarina Richter-Lunn, Irmandy Wicaksono, B. Freedman
Doze is an on-skin, hydrogel-based sleep mask which seeks to improve, enhance, and augment sleep through programmed scent diffusion in tune with the user's cortical rhythms. Taking advantage of hydrogels' unique properties, the Doze mask encapsulates and emits therapeutic scents at a regulated pace. The release of scent is controlled by a heater embedded within the layers of the mask, which communicates wirelessly with a smart device. This communication allows for a personalized dosage release based on the user's biometric or contextual data. Investigating both the pervasive power of smell in enhancing sleep and natural topical remedies, this personalized mask explores the potential for unintrusive solutions to the ever-growing rarity of a good night's sleep.
{"title":"Doze","authors":"Grace Chee, Trevor Cobb, Katarina Richter-Lunn, Irmandy Wicaksono, B. Freedman","doi":"10.1145/3410530.3414407","DOIUrl":"https://doi.org/10.1145/3410530.3414407","url":null,"abstract":"Doze is an on-skin, hydrogel-based sleep mask which seeks to improve, enhance, and augment sleep through the use of programmed scent diffusion in tune with the user's cortical rhythms. Taking advantage of hydrogels' unique properties, the Doze mask encapsulates and emits therapeutic scents at a regulated pace. The release of scent is controlled by an embedded heater within the layers of the mask and communicates remotely to a smart device. This communication allows for a personalized dosage release based on the user's biometric or contextual data. Investigating both the pervasive power of smell in enhancing sleep as well as natural topical remedies, this personalized mask explores the potential for unintrusive solutions to the evergrowing rarity of a good night's sleep.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"75 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75890678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hashini Senaratne, K. Ellis, S. Oviatt, Glenn Melvin
Leg bouncing is assumed to be related to anxiety, engrossment, boredom, excitement, fatigue, impatience, and disinterest. Objective detection of this behaviour would enable research into its relation to different mental and emotional states. However, differentiating this behaviour from other movements has received little study, and it is not well known which sensor placements are best for such detection. We collected recordings of everyday movements, including leg bouncing, from six leg bouncers using tri-axial accelerometers at three leg positions. Using a Random Forest classifier and data collected at the ankle, we obtained 90% accuracy in classifying the recorded everyday movements. Further, we obtained 94% accuracy in classifying four types of leg bouncing. Based on the subjects' opinions of leg bouncing patterns and their experience with wearables, we discuss future research opportunities in this domain.
{"title":"Detecting and differentiating leg bouncing behaviour from everyday movements using tri-axial accelerometer data","authors":"Hashini Senaratne, K. Ellis, S. Oviatt, Glenn Melvin","doi":"10.1145/3410530.3414388","DOIUrl":"https://doi.org/10.1145/3410530.3414388","url":null,"abstract":"Leg bouncing is assumed to be related to anxiety, engrossment, boredom, excitement, fatigue, impatience, and disinterest. Objective detection of this behaviour would enable researching its relation to different mental and emotional states. However, differentiating this behaviour from other movements is less studied. Also, it is less known which sensor placements are best for such detection. We collected recordings of everyday movements, including leg bouncing, from six leg bouncers using tri-axial accelerometers at three leg positions. Using a Random Forest Classifier and data collected at the ankle, we could obtain a 90% accuracy in the classification of the recorded everyday movements. Further, we obtained a 94% accuracy in classifying four types of leg bouncing. Based on the subjects' opinion on leg bouncing patterns and experience with wearables, we discuss future research opportunities in this domain.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84555046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Today's mobile video users suffer unsatisfactory quality of experience, mainly due to the large network distance to centralized infrastructure. To improve quality of experience, content providers are pushing content-distribution capacity to edge networks. However, existing content replication approaches cannot provide sufficient quality of experience for mobile video delivery, because they fail to exploit knowledge of user behavior, such as user preference and mobility, which captures dynamically changing content popularity. To address this problem, we propose a user-behavior-driven collaborative edge-network content replication solution in which user preference and mobility are jointly considered. More specifically, through user-behavior-driven measurement studies of videos and trajectories, we first reveal that both users' intrinsic preferences and their mobility patterns play a significant role in edge-network content delivery. Second, based on these measurement insights, we propose APRank, a joint preference- and mobility-based collaborative edge-network content replication solution. It comprises preference-based demand prediction to predict requests for video content, mobility-based collaboration to predict the movement of users across edge access points (APs), and workload-based collaboration to enable collaborative replication across adjacent APs. APRank can predict the fine-grained content popularity distribution of each AP, handle the trajectory data sparseness problem, and perform dynamic and collaborative content replication for edge APs. Finally, through extensive trace-driven experiments, we demonstrate the effectiveness of our design: APRank achieves 20% lower content access latency and 32% lower workload than traditional approaches.
{"title":"Collaborative edge-network content replication: a joint user preference and mobility approach","authors":"Ge Ma, Qiyang Huang, Weixi Gu","doi":"10.1145/3410530.3414593","DOIUrl":"https://doi.org/10.1145/3410530.3414593","url":null,"abstract":"Today's mobile video users have unsatisfactory quality of experience mainly due to the large network distance to the centralized infrastructure. To improve users' quality of experience, content providers are pushing content distribution capacity to the edge-networks. However, existing content replication approaches cannot provide sufficient quality of experience for mobile video delivery. Because they fail to consider the knowledge of user-behavior such as user preference and mobility, which can capture the dynamically changing content popularity. To address the problem, we propose a user-behavior driven collaborative edge-network content replication solution in which user preference and mobility are jointly considered. More specifically, using user-bahavior driven measurement studies of videos and trajectories, we first reveal that both users' intrinsic preferences and mobility patterns play a significant role in edge-network content delivery. Second, based on the measurement insights, it is proposed that a joint user preference- and mobility-based collaborative edge-network content replication solution, namely APRank. It is comprised of preference-based demand prediction to predict the requests of video content, mobility-based collaboration to predict the movement of users across edge access points (APs), and workload-based collaboration to enables collaborative replication across adjacent APs. APRank is able to predict the fine-grained content popularity distribution of each AP, handle the trajectory data sparseness problem, and make dynamic and collaborative content replication for edge APs. 
Finally, through extensive trace-driven experiments, we demonstrate the effectiveness of our design: APRank achieves 20% less content access latency and 32% less workload against traditional approaches.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"13 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83658639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Homework for low-grade pupils often contains simple arithmetic problems, i.e., the four arithmetic operations. To evaluate pupils' learning quality, teachers and parents often need to check the homework manually, which is time-consuming and labor-intensive. In this paper, we propose a homework auto-checking system, HmwkCheck, which checks the four arithmetic operations automatically. Specifically, HmwkCheck uses a smartphone's embedded camera to capture the homework as an image, then processes the image on the smartphone to detect, segment, and recognize both printed and handwritten characters. We implement HmwkCheck on an Android smartphone. The experimental results show that HmwkCheck can check homework efficiently: the average precision, recall, and F1-score of character recognition reach 94.03%, 93.41%, and 93.72%, respectively.
{"title":"HmwkCheck","authors":"Lingyu Zhang, Yafeng Yin, Linfu Xie, Sanglu Lu","doi":"10.1145/3410530.3414393","DOIUrl":"https://doi.org/10.1145/3410530.3414393","url":null,"abstract":"The homework for low-grade pupils often contains simple arithmetic problems, i.e., four arithmetic operations. To evaluate the learning quality of pupils, teachers and parents often need to check the homework manually, which is time and labor consuming. In this paper, we propose a homework auto-checking system HmwkCheck, which checks the four arithmetic operations automatically. Specifically, HmwkCheck utilizes the embedded camera of a smartphone to capture the homework as an image, and then processes the image in the smartphone to detect, segment and recognize both printed characters and handwritten characters. We implement HmwkCheck in an Android smartphone. The experiment results show that HmwkCheck can check homework efficiently, i.e., the average precision, recall and F1-score of character recognition achieve 94.03%, 93.41% and 93.72%, respectively.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"20 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87372692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Piotr Kotlinski, Xi-Jing Chang, Chih-Yun Yang, Wei-Chen Chiu, Yung-Ju Chang
It would be hard to overstate the importance of Computer Vision (CV), applications of which range from self-driving cars and facial recognition to augmented reality and the healthcare industry. Recent years have witnessed dramatic progress in visual object recognition, partially ascribable to the availability of labeled data. Unfortunately, recognition of obscure, unclear, and ambiguous photos taken from unusual angles or distances remains a major challenge, as recently shown by the creation of ObjectNet [1]. This paper complements that work via a game in which obscure, unclear, and ambiguous photos are collaboratively created and labeled by the players, who adopt the role of detectives collecting evidence against in-game criminals. The game rules enforce the creation of images that are challenging to identify for CV and people alike, as a means of ensuring the high quality of players' input.
{"title":"Using gamification to create and label photos that are challenging for computer vision and people","authors":"Piotr Kotlinski, Xi-Jing Chang, Chih-Yun Yang, Wei-Chen Chiu, Yung-Ju Chang","doi":"10.1145/3410530.3414420","DOIUrl":"https://doi.org/10.1145/3410530.3414420","url":null,"abstract":"It would be hard to overstate the importance of Computer Vision (CV), applications of which can be found from self-driving cars, through facial recognition to augmented reality and the healthcare industry. Recent years have witnessed dramatic progress in visual-object recognition, partially ascribable to the availability of labeled data. Unfortunately, recognition of obscure, unclear and ambiguous photos that are taken from unusual angles or distances remains a major challenge, as recently shown by the creation of the ObjectNet [1]. This paper complements that work via a game in which obscure, unclear and ambiguous photos are collaboratively created and labeled by the players, who adopt the role of detectives collecting evidence against in-game criminals. The game rules enforce the creation of images that are challenging to identify for CV and people alike, as a means of ensuring the high quality of players' input.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"32 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72710532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prior interruptibility research has focused on identifying interruptible or opportune moments for users to handle notifications. Yet users may not want to attend to all notifications even at these moments. Research has shown that users currently attend selectively by speculating about notification sources. Sometimes, however, the available information is insufficient, making such speculation difficult. This paper describes the first research attempt to examine how well a machine learning model can predict the moments when users would incorrectly speculate about the sender of a notification. We built a machine learning model that achieves 84.39% recall, 56.78% precision, and an F1-score of 0.68. We also identify the important features for predicting these moments.
{"title":"A preliminary attempt of an intelligent system predicting users' correctness of notifications' sender speculation","authors":"Tang-Jie Chang, Jian-Hua Jiang Chen, Hao-Ping Lee, Yung-Ju Chang","doi":"10.1145/3410530.3414390","DOIUrl":"https://doi.org/10.1145/3410530.3414390","url":null,"abstract":"Prior interruptibility research has focused on identifying interruptible or opportune moments for users to handle notifications. Yet, users may not want to attend to all notifications even at these moments. Research has shown that users' current practices for selective attendance are through speculating about notification sources. Yet, sometimes the above information is insufficient, making speculations difficult. This paper describes the first research attempt to examine how well a machine learning model can predict the moments when users would incorrectly speculate the sender of a notification. We built a machine learning model that can achieve an recall: 84.39%, precision: 56.78%, and F1-score of 0.68. We also show that important features for predicting these moments.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"31 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75277767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Aliyev, Bo Zhou, Peter Hevesi, Marco Hirsch, P. Lukowicz
This work demonstrates a connected smart helmet platform, HeadgearX, aimed at improving personnel safety and real-time monitoring on construction sites. The smart helmet's hardware design is driven by flexible and expandable sensing and actuating capabilities, adapting to various workplace requirements and functionalities. In our demonstrator, the system consists of ten different sensors, visual and haptic feedback mechanisms, and Bluetooth connectivity. A companion Android application adds further functionality, including features configurable over-the-air. Construction project supervisors can monitor all on-site personnel's real-time status from a central web server, which communicates with individual HeadgearX helmets via the companion app. Several use-case scenarios are demonstrated as examples, and further specific functionality can be added to HeadgearX through either software reconfiguration of the existing system or hardware modifications.
{"title":"HeadgearX","authors":"A. Aliyev, Bo Zhou, Peter Hevesi, Marco Hirsch, P. Lukowicz","doi":"10.1145/3410530.3414326","DOIUrl":"https://doi.org/10.1145/3410530.3414326","url":null,"abstract":"This work demonstrates a connected smart helmet platform, HeadgearX, aimed at improving personnel safety and real-time monitoring of construction sites. The smart helmet hardware design is driven by flexible and expandable sensing and actuating capabilities to adapt to various workplace requirements and functionalities. In our demonstrator, the system consists of ten different sensors, visual and haptic feedback mechanism, and Bluetooth connectivity. A companion Android application is also developed to add further functionalities including those configurable over-the-air. The construction project supervisors can monitor all on-site personnel's real-time statuses from a central web server which communicates to individual HeadgearX helmets via the companion app. Several use case scenarios are demonstrated as examples, while further specific functionalities can be added into HeadgearX by either software re-configurations with the existing system or hardware modifications.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"117 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73507699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Push notifications are a design tool used by mobile and web apps to alert subscribers to new information. In recent years, due to widespread adoption of the technology and the shrinking level of user attention available, marketing techniques have been deployed to persuade subscribers to engage positively with notifications. One such technique, known as the curiosity gap, exploits Loewenstein's information-gap theory. This paper explores the impact of enticing notification text, instilled by the curiosity gap, on subsequent engagement actions. A classifier was defined to identify enticing language in notifications, and features commonly paired with enticing text were identified. Intelligent notification delivery agents, trained on data captured in the wild, were evaluated using enticing and non-enticing notifications to demonstrate the influence of enticing text. Additionally, a solution for limiting subscriber susceptibility to enticing notifications was proposed and briefly evaluated.
{"title":"Enticing notification text & the impact on engagement","authors":"Kieran Fraser, Owen Conlan","doi":"10.1145/3410530.3414430","DOIUrl":"https://doi.org/10.1145/3410530.3414430","url":null,"abstract":"Push-notifications are a design tool used by mobile and web apps to alert subscribers to new information. In recent years, due to widespread adoption of the technology and the shrinking level of user attention available, marketing techniques have been deployed to persuade subscribers to engage positively with notifications. One such technique, known as the curiosity gap, exploits Lowenstein's Information-Gap theory. This paper explores the impact of enticing notification text, instilled by the curiosity gap, on subsequent engagement actions. A classifier was defined to identify enticing language in notifications. Features commonly paired with enticing text were identified. Intelligent notification delivery agents, trained using data captured in-the-wild, were evaluated using enticing and non-enticing notifications to demonstrate the influence of enticing text. Additionally, a solution was proposed and briefly evaluated for limiting subscriber susceptibility to enticing notifications.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"98 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75823735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A memorable city exploration experience requires some unexpected surprises. For pedestrians exploring city blocks, ordinary route-planning and navigation systems cannot meet the need for interesting exploration and may even miss possible surprises along the way. To enhance resident users' experience of city exploration, we designed a gamified exploratory navigation system. Our system engages users when they are close to a point of interest (POI) by proposing interactive activities and "conversing" with them. We conducted a preliminary field experiment with 5 participants to evaluate our system and to observe how mobile technology and navigation systems are practically used in city exploration. We hope our study provides reflections for the further design of such services and systems, which would engage residents in exploring the city and strengthen their connection with it.
{"title":"Gamified navigation system: enhancing resident user experience in city exploration","authors":"Yiyi Zhang, Tatsuoki Nakajima","doi":"10.1145/3410530.3414405","DOIUrl":"https://doi.org/10.1145/3410530.3414405","url":null,"abstract":"A memorable city exploration experience requires some unexpected surprises. For pedestrians exploring in city blocks, ordinary route planning and navigation system cannot meet the need of interesting exploration and would even miss the possible surprises on the way. In order to enhance resident user experience in city exploration, we designed a gamified exploratory navigation system. Our system would engage the user when they are close to a point of interest (POI) by proposing interactive activities and \"conversing\" with them. We conducted preliminary field experiment with 5 participants to evaluate our system and observe how mobile technology and navigation system are practical used in city exploration. We hope our study could provide some reflecting for the further design of these kinds of services and systems which would engage residents in exploring the city and strengthen the connection with the city.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73966079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}