Permission-free Keylogging through Touch Events Eavesdropping on Mobile Devices
L. Bedogni, Andrea Alcaras, L. Bononi
Pub Date: 2019-03-01. DOI: 10.1109/PERCOMW.2019.8730731
Mobile devices are carried by many individuals worldwide, who use them to communicate with friends, browse the web, and run different applications depending on their objectives. These devices are normally equipped with integrated sensors such as accelerometers and magnetometers, through which application developers can obtain inertial measurements of the device's dynamics and infer what the user is doing. As users type on the touch keyboard with one hand, they also tilt the smartphone to reach the area to be pressed. In this paper, we show that these zero-permission sensors make it possible to identify the area pressed by the user with more than 80% accuracy in some scenarios. Moreover, by correlating subsequent areas associated with keyboard keys, it is also possible to determine the words typed by the user, even long ones. This reveals what users are doing and thus raises privacy concerns.
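The core idea in the abstract above, relating device tilt during a tap to the keyboard area being pressed, can be loosely sketched as a nearest-centroid classifier. Everything below is an invented illustration: the region layout, centroid values, and function names are assumptions, not the paper's actual features or model.

```python
import numpy as np

# Hypothetical sketch: map a (pitch, roll) tilt sample, as readable from
# zero-permission inertial sensors, to a coarse keyboard region by
# finding the nearest region centroid. Centroid values are invented.
REGION_CENTROIDS = {
    "left":   np.array([-8.0, -5.0]),   # tilt while reaching left-side keys
    "center": np.array([-6.0,  0.0]),
    "right":  np.array([-8.0,  5.0]),   # tilt while reaching right-side keys
}

def classify_tap(pitch: float, roll: float) -> str:
    """Return the keyboard region whose centroid is closest to the tilt sample."""
    sample = np.array([pitch, roll])
    return min(REGION_CENTROIDS,
               key=lambda r: np.linalg.norm(sample - REGION_CENTROIDS[r]))

print(classify_tap(-7.5, 4.2))  # prints: right
```

A real attack of this kind would train per-key (not per-region) models and then chain consecutive predictions against a dictionary to recover whole words, as the abstract describes.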
Making Wearable Sensing Less Obtrusive
Vu H. Tran, Archan Misra
Pub Date: 2019-03-01. DOI: 10.1109/PERCOMW.2019.8730740
Sensing is a crucial part of any cyber-physical system. Wearable devices have huge potential for sensing applications because they are worn on the user's body. However, wearable sensing can be obtrusive to the user. Obtrusiveness can be seen as a perception of a lack of usefulness [1], such as lag in the user-interaction channel. In addition, because a wearable is worn rather than connected to a power supply, it must be removed to be charged regularly, which can be a nuisance for elderly or disabled people. At the same time, there are also opportunities for wearable devices to assist users in daily-life activities. In my proposal, I propose three directions for making wearable sensing less obtrusive: (1) reduce obtrusiveness in user interaction with the device, (2) reduce obtrusiveness in powering the device, and (3) use wearables to reduce obtrusiveness in user interaction with the surrounding environment.
UNAGI'19 - Workshop on UNmanned aerial vehicle Applications in the Smart City: from Guidance technology to enhanced system Interaction - Welcome and Committees
A. Bernardos, Jesús García, Hideo Saito, P. Marti
Pub Date: 2019-03-01. DOI: 10.1109/percomw.2019.8730748
MUSICAL'19 - International Workshop on Mobile Ubiquitous Systems, Infrastructures, Communications and AppLications - Program
Pub Date: 2019-03-01. DOI: 10.1109/percomw.2019.8730825
A Novel Input Set for LSTM-Based Transport Mode Detection
Güven Aşçı, M. A. Güvensan
Pub Date: 2019-03-01. DOI: 10.1109/PERCOMW.2019.8730799
The capabilities of mobile phones are increasing with the development of hardware and software technology. In particular, smartphone sensors make it possible to collect environmental and personal information. Thus, with the help of smartphones, human activity recognition and transport mode detection (TMD) have become major research areas in the last decade. This study introduces a novel input set for daily activities, mainly transportation modes, in order to increase the detection rate. The frame-based input set, consisting of time-domain and frequency-domain features, is fed to an LSTM network. As a result, the classification accuracy on the public HTC dataset for 10 different transportation modes climbs to 97%, 2% higher than the state-of-the-art method in the literature.
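The frame-based input described above, combining time-domain and frequency-domain features per frame, can be sketched as follows. The specific features chosen here (mean, std, min, max, and a few FFT bin magnitudes) and the frame length are illustrative assumptions, not the paper's exact input set; the LSTM itself is omitted.

```python
import numpy as np

def frame_features(signal: np.ndarray, frame_len: int = 128) -> np.ndarray:
    """Split a 1-D sensor trace into non-overlapping frames and compute,
    per frame, simple time-domain features (mean, std, min, max) and
    frequency-domain features (magnitudes of the first non-DC FFT bins).
    Feature choice is illustrative, not the paper's exact set."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    time_feats = np.stack(
        [frames.mean(axis=1), frames.std(axis=1),
         frames.min(axis=1), frames.max(axis=1)],
        axis=1,
    )
    spectrum = np.abs(np.fft.rfft(frames, axis=1))
    freq_feats = spectrum[:, 1:5]  # first few non-DC bin magnitudes
    return np.hstack([time_feats, freq_feats])  # shape: (n_frames, 8)

# Synthetic "accelerometer" trace: a 3 Hz sine sampled over 4 seconds.
x = np.sin(2 * np.pi * 3 * np.linspace(0, 4, 512))
feats = frame_features(x)
print(feats.shape)  # prints: (4, 8)
```

A sequence of such per-frame feature vectors is exactly the kind of input an LSTM consumes, one timestep per frame.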
Lifelong Learning in Sensor-Based Human Activity Recognition
Juan Ye
Pub Date: 2019-03-01. DOI: 10.1109/PERCOMW.2019.8730783
Sensor-based human activity recognition aims to recognise users' current activities from a collection of sensor data in real time. This ability presents an unprecedented opportunity for many applications, and ambient assisted living (AAL) for elderly care is one of the most exciting examples. For example, from meal-preparation activities we can derive the user's diet routine and detect any anomaly or decline in physical or cognitive condition, leading to an immediate, appropriate change in their care plan. With a rapidly ageing population and an overstretched healthcare system, there is a rapidly growing need for an AAL industry. However, the complexity of real-world deployment significantly challenges current sensor-based human activity recognition, including the inherently imperfect nature of sensing technologies, constant change in activity routines, and the unpredictability of situations or events occurring in an environment. Such complexity can decrease recognition accuracy over time and further degrade the performance of an AAL system. The state-of-the-art methodology for studying human activity recognition has been cultivated in short-term lab or testbed experimentation, i.e., relying on well-annotated sensor data and assuming no change in activity models, which is no longer suitable for long-term, large-scale, real-world deployment. This creates the need for an activity recognition system capable of embedding the means of automatic adaptation to change, i.e., lifelong learning. This talk will discuss new challenges and opportunities in lifelong learning for human activity recognition, with a particular focus on transfer learning of activity labels across heterogeneous datasets.
Adaptive C-RAN Architecture for Smart City using Crowdsourced Radio Units
Yu Nakayama, Kazuaki Honda, D. Hisano, K. Maruta
Pub Date: 2019-03-01. DOI: 10.1109/PERCOMW.2019.8730842
The spatio-temporal fluctuation of mobile traffic demand drastically deteriorates the efficiency and financial viability of conventional mobile networks. To address this problem, this paper proposes an adaptive centralized radio access network (C-RAN) architecture for a smart city using crowdsourced radio units (CRUs). The proposed architecture contributes to the efficient deployment of mobile networks and the better use of energy in a smart city. An edge server computes the optimum CRU states and activates or deactivates the units based on traffic information measured by roadside units. In this paper we present the basic idea and evaluation results obtained via numerical analysis.
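The activation decision the abstract describes (an edge server switching CRUs on or off to match measured demand) might, in its very simplest form, look like the greedy heuristic below. The CRU names, capacities, and demand figure are invented for the example; the paper's actual optimisation model is not reproduced here.

```python
# Illustrative greedy sketch, not the paper's algorithm: given aggregate
# traffic demand reported by roadside units, activate the largest-capacity
# crowdsourced radio units (CRUs) until the demand is covered, leaving the
# rest deactivated to save energy.

def select_active_crus(demand: float, cru_capacity: dict[str, float]) -> list[str]:
    """Greedily activate CRUs (largest capacity first) until coverage >= demand."""
    active, covered = [], 0.0
    for cru, cap in sorted(cru_capacity.items(), key=lambda kv: -kv[1]):
        if covered >= demand:
            break
        active.append(cru)
        covered += cap
    return active

crus = {"cru-a": 40.0, "cru-b": 25.0, "cru-c": 10.0}
print(select_active_crus(55.0, crus))  # prints: ['cru-a', 'cru-b']
```

In the architecture sketched by the abstract, this computation would run periodically at the edge server as roadside-unit measurements arrive, so the active set tracks the spatio-temporal demand fluctuation.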
Human Activity Recognition in Smart-Home Environments for Health-Care Applications
Gabriele Civitarese
Pub Date: 2019-03-01. DOI: 10.1109/PERCOMW.2019.8730719
With a growing population of elderly people, the number of subjects at risk of cognitive disorders is rapidly increasing. Many research groups are studying pervasive solutions to continuously and unobtrusively monitor fragile subjects in their homes. Clinicians are interested in monitoring several behavioral aspects for a wide variety of applications: early diagnosis, emergency monitoring, assessment of cognitive disorders, and so on. Among the behavioral aspects of interest, anomalous behaviors while performing activities of daily living (ADLs) are of great importance, since these anomalies can be indicators of cognitive decline. The recognition of such abnormal behaviors relies on robust and accurate ADL recognition systems. Moreover, to enable unobtrusive and privacy-aware monitoring, environmental sensors, which unobtrusively capture the subject's interactions with the home infrastructure, should be preferred. This talk presents our latest research efforts on these topics. In particular, the talk covers: a) novel unobtrusive sensing solutions, b) hybrid ADL recognition methods, and c) techniques to detect abnormal behaviors at a fine granularity. We will discuss these challenges, reporting our experience and identifying critical aspects that still need to be investigated.
EmotionAware'19 - 3rd International Workshop on Emotion Awareness for Pervasive Computing with Mobile and Wearable Devices - Program
Pub Date: 2019-03-01. DOI: 10.1109/percomw.2019.8730839
Integration of spoken dialogue system and ubiquitous computing
Yutaka Arakawa
Pub Date: 2019-03-01. DOI: 10.1109/PERCOMW.2019.8730712
With the progress of ubiquitous computing, computers and machines can understand various human contexts via various sensors. A wearable device can estimate calories burned, fatigue, and even quality of life (QoL) by analyzing heart rate, step counts, sleep quality, and so on. Simultaneously, the significant progress of deep learning has brought drastic performance improvements not only in image recognition, but also in speech processing and natural language processing. Nowadays it is becoming a reality that a humanoid robot can instantaneously recognize what is shown in a camera image and speak human-like sentences, with a human-like voice, in multiple languages. Collaboration between humans and machines has therefore already started. In call centers, AI chatbots already handle typical inquiries on behalf of human operators. Smartwatches and activity trackers continuously monitor their owner's physical state and sometimes intervene to improve the owner's health. We are also developing digital signage that persuades passers-by to change their behavior for the better. However, there is still a distance between actual human-to-human interaction and machine-to-human interaction: there is context information that the machine side is not yet aware of. For example, humans observe slight changes in facial expression and body gesture and adjust their way of talking and tone accordingly, but a machine cannot take such information (emotion, agreement, etc.) into consideration when it generates a dialogue. In my keynote, I will broadly introduce leading-edge research on context recognition in ubiquitous computing, and then explain the requirements for a next-generation dialogue system. In such a system, it is natural to change the content of the conversation and the manner of utterance according to the recognized context; for example, the conversation content may change according to the user's step count and stress level during the dialogue. Finally, we discuss the technical issues involved in integrating spoken dialogue systems with ubiquitous computing.