Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers: Latest Publications
We present a generative adversarial network (GAN) approach to recognising modes of transportation from smartphone motion sensor data, as part of our contribution to the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge 2020 as team noname. Our approach uses heuristics to identify where on the body the smartphone in the test dataset is carried, after which a location-specific model is trained on the published data available for that location. Performance on the validation data is 0.95, which we expect to be very similar on the test set if our estimate of the phone's location in the test set is correct. We are highly confident in this location estimate; if it were wrong, however, accuracy could drop as low as 30%.
{"title":"Smartphone location identification and transport mode recognition using an ensemble of generative adversarial networks","authors":"Lukas Günthermann, Ivor Simpson, D. Roggen","doi":"10.1145/3410530.3414353","DOIUrl":"https://doi.org/10.1145/3410530.3414353","url":null,"abstract":"We present a generative adversarial network (GAN) approach to recognising modes of transportation from smartphone motion sensor data, as part of our contribution to the Sussex-Huawei Locomotion-Transportation (SHL) recognition challenge 2020 as team noname. Our approach identifies the location where the smartphone of the test dataset is carried on the body through heuristics, after which a location-specific model is trained based on the available published data at this location. Performance on the validation data is 0.95, which we expect to be very similar on the test set, if our estimation of the location of the phone on the test set is correct. We are highly confident in this location estimation. If however it were wrong, an accuracy as low as 30% could be expected.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"14 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79003937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lin Wang, H. Gjoreski, Mathias Ciliberto, P. Lago, Kazuya Murao, Tsuyoshi Okita, D. Roggen
In this paper we summarize the contributions of participants to the third Sussex-Huawei Locomotion-Transportation (SHL) Recognition Challenge organized at the HASCA Workshop of UbiComp/ISWC 2020. The goal of this machine learning/data science challenge is to recognize eight locomotion and transportation activities (Still, Walk, Run, Bike, Bus, Car, Train, Subway) from the inertial sensor data of a smartphone in a user-independent manner with an unknown target phone position. The training data of a "train" user is available from smartphones placed at four body positions (Hand, Torso, Bag and Hips). The testing data originates from "test" users with a smartphone placed at one, but unknown, body position. We introduce the dataset used in the challenge and the protocol of the competition. We present a meta-analysis of the contributions from 15 submissions, covering their approaches, the software tools used, computational cost and the achieved results. Overall, one submission achieved an F1 score above 80%, three scored between 70% and 80%, seven between 50% and 70%, and four below 50%, with a maximum latency of 5 seconds.
{"title":"Summary of the sussex-huawei locomotion-transportation recognition challenge 2020","authors":"Lin Wang, H. Gjoreski, Mathias Ciliberto, P. Lago, Kazuya Murao, Tsuyoshi Okita, D. Roggen","doi":"10.1145/3410530.3414341","DOIUrl":"https://doi.org/10.1145/3410530.3414341","url":null,"abstract":"In this paper we summarize the contributions of participants to the third Sussex-Huawei Locomotion-Transportation (SHL) Recognition Challenge organized at the HASCA Workshop of UbiComp/ISWC 2020. The goal of this machine learning/data science challenge is to recognize eight locomotion and transportation activities (Still, Walk, Run, Bike, Bus, Car, Train, Subway) from the inertial sensor data of a smartphone in a user-independent manner with an unknown target phone position. The training data of a \"train\" user is available from smartphones placed at four body positions (Hand, Torso, Bag and Hips). The testing data originates from \"test\" users with a smartphone placed at one, but unknown, body position. We introduce the dataset used in the challenge and the protocol of the competition. We present a meta-analysis of the contributions from 15 submissions, their approaches, the software tools used, computational cost and the achieved results. 
Overall, one submission achieved F1 scores above 80%, three with F1 scores between 70% and 80%, seven between 50% and 70%, and four below 50%, with a latency of maximum of 5 seconds.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79443735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
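The F1 scores used to rank submissions above can be reproduced with a macro-averaged F1, sketched below (the challenge's exact averaging protocol is assumed to be a plain unweighted mean over the eight classes):

```python
def macro_f1(y_true, y_pred, labels):
    """Unweighted average of per-class F1 scores; every class weighs
    equally, which matters for imbalanced activity distributions."""
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

Because rare classes (e.g., Run) count as much as frequent ones (e.g., Still), macro-F1 rewards balanced recognizers rather than majority-class guessing.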
F. Mohammadi, Farzan Shenavarmasouleh, M. Amini, H. Arabnia
Malware detection has become a challenging task due to the increase in the number of malware families. Universal malware detection algorithms that can detect all malware families are needed to make the whole process feasible. However, the more universal an algorithm is, the more feature dimensions it needs to work with, which inevitably raises the problem of the Curse of Dimensionality (CoD). The real-time nature of malware analysis makes such a solution even harder to realise. In this paper, we address this problem and propose a feature-selection-based malware detection algorithm using an evolutionary algorithm known as Artificial Bee Colony (ABC). The proposed algorithm enables researchers to reduce the feature dimension and, as a result, speed up the process of malware detection. The experimental results reveal that the proposed method outperforms the state-of-the-art.
{"title":"Malware detection using artificial bee colony algorithm","authors":"F. Mohammadi, Farzan Shenavarmasouleh, M. Amini, H. Arabnia","doi":"10.1145/3410530.3414598","DOIUrl":"https://doi.org/10.1145/3410530.3414598","url":null,"abstract":"Malware detection has become a challenging task due to the increase in the number of malware families. Universal malware detection algorithms that can detect all the malware families are needed to make the whole process feasible. However, the more universal an algorithm is, the higher number of feature dimensions it needs to work with, and that inevitably causes the emerging problem of Curse of Dimensionality (CoD). Besides, it is also difficult to make this solution work due to the real-time behavior of malware analysis. In this paper, we address this problem and aim to propose a feature selection based malware detection algorithm using an evolutionary algorithm that is referred to as Artificial Bee Colony (ABC). The proposed algorithm enables researchers to decrease the feature dimension and as a result, boost the process of malware detection. The experimental results reveal that the proposed method outperforms the state-of-the-art.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"107 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81420475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Traditional defect classification in TFT-LCD array processing relied on human decision-makers, with visual inspection used to categorize defects and thereby identify their root causes. In practice, the main sources of defects in the TFT-LCD array process are particles. Due to the huge size of the machinery and production tools in the TFT-LCD array process, sensor allocation for particle detection plays a critical role in the adequacy and quality of the sensor data. Therefore, since the adequacy and efficiency of human performance depend on human factors, emotion, and level of attention, this study aims to design a semi-automatic defect detection and classification method based on the information captured by particle-detector sensors, reducing the inspectors' cognitive load while supporting the process of defect classification.
{"title":"How particle detector can aid visual inspection for defect detection of TFT-LCD manufacturing","authors":"M. Khakifirooz, M. Fathi","doi":"10.1145/3410530.3414596","DOIUrl":"https://doi.org/10.1145/3410530.3414596","url":null,"abstract":"Traditional defect classification of TFT-LCD array processing leaned on human decision-maker in which visual inspection used to categorize defects and consequently identify the rout-causes of defects. In practice, the main sources of defects in the TFT-LCD array process are particles. Due to the huge size of the machinery and production tools in the TFT-LCD array process, the sensor allocation for particle detection plays a critical role in the inadequacy and quality of sensor data. Therefore, where the adequacy and efficiency of human performance depend on human factors, emotion, and level of attention, this study aims to design a semi-automatic defect detection and classification method based on information capture by particle detector sensors to reduce the cognitive load devaluation and proceed with the process of defect classification.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"57 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85616259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Although activity recognition has been studied extensively over the last two decades, it is still not easy to handle complicated activity classes in a specific domain. The 2nd Nurse Care Activity Recognition Challenge Using Lab and Field Data aims to explore some of those complicated activities by focusing on nurse care. Our team, "UCLab", found that the main problem in the challenge is the imbalance and unevenness of the dataset, both of which often occur in real-world data. Considering this problem, we approached the challenge using a Random Forest-based method with multiple preprocessing steps to classify 12 activity classes. Our approach consists of the following steps: we first preprocessed the acceleration data to obtain uniformly sampled signals; we then extracted the acceleration data corresponding to each row of the given label data and computed feature values. We adopted Random Forest for classification and post-processed the predictions obtained from the classifier. As a result, we obtained 51.5% accuracy in the trial-based evaluation.
{"title":"Nurse care activity recognition challenge: a comparative verification of multiple preprocessing approaches","authors":"Hitoshi Matsuyama, Takuto Yoshida, Nozomi Hayashida, Yuto Fukushima, Takuro Yonezawa, Nobuo Kawaguchi","doi":"10.1145/3410530.3414333","DOIUrl":"https://doi.org/10.1145/3410530.3414333","url":null,"abstract":"Although activity recognition has been studied considerably for the last two decades, it is still not so easy to handle complicated activity classes in a specific domain. The 2nd Nurse Care Activity Recognition Challenge Using Lab and Field Data aims to explore a part of those complicated activities by focusing on the nurse caring. Our team, \"UCLab\", found that the main problem in the challenge is the imbalance and unevenness of the dataset, each of which often happens in real-field data. Considering the problem, we approached the challenge using a Random Forest-based method with multiple preprocessing to classify 12 activity modes. Our approach consists of the following steps: We first preprocessed the acceleration data to obtain uniformly sampled signals. Then we extracted acceleration data with respect to each row of the given label data and extracted feature values. We adopted Random Forest for classification and performed post-processing to the predicted data obtained from the classifier. 
As a result, we obtained 51.5% accuracy with the trial-based evaluation.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"83 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85540240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
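The first two preprocessing steps described above (uniform resampling, then per-label-row feature extraction) can be sketched as follows; the abstract does not list the exact features, so the statistics below are common choices, and the resulting vectors would be fed to a Random Forest classifier:

```python
import numpy as np

def resample_uniform(t, x, rate_hz):
    """Linearly interpolate irregularly sampled acceleration onto a
    uniform time grid (the 'uniformly sampled signals' step)."""
    t_new = np.arange(t[0], t[-1], 1.0 / rate_hz)
    return t_new, np.interp(t_new, t, x)

def window_features(x):
    """Basic statistics for one labelled segment: mean, std, min, max,
    and mean absolute first difference (a rough jerkiness measure)."""
    return np.array([x.mean(), x.std(), x.min(), x.max(),
                     np.abs(np.diff(x)).mean()])
```

One feature vector per labelled row gives a fixed-size design matrix regardless of how long each care activity lasted, which is what a Random Forest expects.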
Simultaneous alcohol and marijuana (SAM) use can significantly impact young adults' physical and mental well-being. While SAM use is becoming increasingly prevalent in this population, there has not been much work to monitor and understand related behaviors and contexts. We aim to address this gap by using smartwatches to collect ecological momentary assessments (EMAs) and sensor data. In this paper, we describe the design and development of the smartwatch framework focusing on SAM use. We also collected pilot data from an n=1 deployment over 7 days using the framework. Our findings indicate that EMAs on smartwatches can be completed with lower perceived burden, which is important for longitudinal SAM use data collection. We also provide design guidelines and rationale for future work aiming to use smartwatches.
{"title":"WatchOver","authors":"Sahiti Kunchay, Saeed Abdullah","doi":"10.1145/3410530.3414373","DOIUrl":"https://doi.org/10.1145/3410530.3414373","url":null,"abstract":"Simultaneous alcohol and marijuana (SAM) use can significantly impact young adults' physical and mental well-being. While SAM use is becoming increasingly prevalent in this population, there has not been much work to monitor and understand related behaviors and contexts. We aim to address this gap by using smartwatches to collect ecological momentary assessments (EMAs) and sensor data. In this paper, we describe the design and development of the smartwatch framework focusing on SAM use. We also collected pilot data from an n=1 deployment over 7 days using the framework. Our findings indicate that EMAs on smartwatches can be completed with lower perceived burden, which is important for longitudinal SAM use data collection. We also provide design guidelines and rationale for future work aiming to use smartwatches.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"64 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78295680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We designed and developed DOOM (Adversarial-DRL based Opcode level Obfuscator to generate Metamorphic malware), a novel system that uses adversarial deep reinforcement learning to obfuscate malware at the op-code level for the enhancement of IDS. The ultimate goal of DOOM is not to put a potent weapon in the hands of cyber-attackers, but to create defensive mechanisms against advanced zero-day attacks. Experimental results indicate that the obfuscated malware created by DOOM could effectively mimic multiple simultaneous zero-day attacks. To the best of our knowledge, DOOM is the first system that can generate obfuscated malware down to the individual op-code level. DOOM is also the first system to use efficient continuous-action-control-based deep reinforcement learning in the area of malware generation and defense. Experimental results indicate that over 67% of the metamorphic malware generated by DOOM could easily evade detection by even the most potent IDS.
{"title":"DOOM: a novel adversarial-DRL-based op-code level metamorphic malware obfuscator for the enhancement of IDS","authors":"Mohit Sewak, S. Sahay, Hemant Rathore","doi":"10.1145/3410530.3414411","DOIUrl":"https://doi.org/10.1145/3410530.3414411","url":null,"abstract":"We designed and developed DOOM (Adversarial-DRL based Opcode level Obfuscator to generate Metamorphic malware), a novel system that uses adversarial deep reinforcement learning to obfuscate malware at the op-code level for the enhancement of IDS. The ultimate goal of DOOM is not to give a potent weapon in the hands of cyber-attackers, but to create defensive-mechanisms against advanced zero-day attacks. Experimental results indicate that the obfuscated malware created by DOOM could effectively mimic multiple-simultaneous zero-day attacks. To the best of our knowledge, DOOM is the first system that could generate obfuscated malware detailed to individual op-code level. DOOM is also the first-ever system to use efficient continuous action control based deep reinforcement learning in the area of malware generation and defense. Experimental results indicate that over 67% of the metamorphic malware generated by DOOM could easily evade detection from even the most potent IDS. 
This achievement gains significance, as with this, even IDS augment with advanced routing sub-system can be easily evaded by the malware generated by DOOM.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"24 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82155333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mohammad Sabik Irbaz, Abir Azad, Tanjila Alam Sathi, Lutfun Nahar Lota
Sensor-based human activity recognition has become a challenging and emerging research area. Several machine learning algorithms with appropriate feature extraction have been used to solve human activity recognition tasks. While recent research has mainly focused on various deep learning algorithms, which require a high computational cost, this study measures the performance of traditional machine learning algorithms with the incorporation of frequency-domain features. In this paper, we used Naive Bayes, K-Nearest Neighbour, SVM, Random Forest and Multilayer Perceptron with the necessary feature extraction for our experiments, and achieved the best performance with K-Nearest Neighbour. Our experiment was part of "The 2nd Nurse Care Activity Recognition Challenge Using Lab and Field Data", in which we participated as team MoonShot_BD. We conclude that, with proper feature extraction, machine learning techniques can solve activity recognition at a low computational cost.
{"title":"Nurse care activity recognition based on machine learning techniques using accelerometer data","authors":"Mohammad Sabik Irbaz, Abir Azad, Tanjila Alam Sathi, Lutfun Nahar Lota","doi":"10.1145/3410530.3414339","DOIUrl":"https://doi.org/10.1145/3410530.3414339","url":null,"abstract":"Sensor-based human activity recognition has become one of the challenging and emerging research areas. Several machine learning algorithm with appropriate feature extraction has been used to solve human activity recognition task. However, recent research mainly focused on various deep learning algorithms, our focus of this study is measuring the performance of traditional machine learning algorithms with the incorporation of frequency-domain features. Because deep learning methods require a high computational cost. In this paper, we used Naive Bayes, K-Nearest Neighbour, SVM, Random Forest and Multilayer Perceptron with necessary feature extraction for our experimentation. We achieved best performance for K-Nearest Neighbour. Our experiment was a part of \"The 2nd Nurse Care Activity Recognition Challenge Using Lab and Field Data\" followed by the team MoonShot_BD. 
We concluded that with proper feature extraction, machine learning techniques may be useful to solve activity recognition with a low computational cost.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"10 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76558786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
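A minimal sketch of the winning combination described above: frequency-domain features from an acceleration window, classified by a plain k-nearest-neighbour vote. The feature choice (leading FFT magnitudes) and parameters are assumptions for illustration, not the authors' code:

```python
import numpy as np

def freq_features(window, n_coeffs=5):
    """Magnitudes of the first FFT coefficients of a (mean-removed)
    acceleration window: a simple frequency-domain feature vector."""
    spectrum = np.abs(np.fft.rfft(window - window.mean()))
    return spectrum[:n_coeffs]

def knn_predict(X_train, y_train, x, k=3):
    """k-nearest-neighbour majority vote in feature space."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    vals, counts = np.unique(y_train[nearest], return_counts=True)
    return vals[counts.argmax()]
```

KNN has no training cost at all, which matches the study's point that traditional methods can avoid the computational burden of deep learning.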
Fine-grained human activity recognition focuses on recognizing event- or action-level activities, which enables a new set of Internet-of-Things (IoT) applications such as behavior analysis. Prior work on fine-grained activity recognition relies on supervised sensing, which makes the fine-grained labeling labor-intensive and difficult to scale up. On the other hand, it is much more practical to collect coarse-grained labels at the level of activities of daily living (e.g., cooking, working), especially for real-world IoT systems. In this paper, we present a framework that learns fine-grained human activity recognition from coarse-grained labels and a small amount of fine-grained labeled multi-modal data. Our system leverages the implicit physical knowledge in the hierarchy of the coarse- and fine-grained labels and conducts data-driven hierarchical learning that takes the coarse-grained supervised predictions into account for fine-grained semi-supervised learning. We evaluated our framework and its CFR-TSVM algorithm on data gathered from real-world experiments. Results show that CFR-TSVM achieved 81% recognition accuracy over 10 fine-grained activities, halving the prediction error of the semi-supervised learning baseline TSVM.
{"title":"Fine-grained activities recognition with coarse-grained labeled multi-modal data","authors":"Zhizhang Hu, Tong Yu, Yue Zhang, Shijia Pan","doi":"10.1145/3410530.3414320","DOIUrl":"https://doi.org/10.1145/3410530.3414320","url":null,"abstract":"Fine-grained human activities recognition focuses on recognizing event- or action-level activities, which enables a new set of Internet-of-Things (IoT) applications such as behavior analysis. Prior work on fine-grained human activities recognition relies on supervised sensing, which makes the fine-grained labeling labor-intensive and difficult to scale up. On the other hand, it is much more practical to collect coarse-grained label at the level of activity of daily living (e.g., cooking, working), especially for real-world IoT systems. In this paper, we present a framework that learns fine-grained human activities recognition with coarse-grained labeled and a small amount of fine-grained labeled multi-modal data. Our system leverages the implicit physical knowledge on the hierarchy of the coarse- and fine-grained labels and conducts data-driven hierarchical learning that take into account the coarse-grained supervised prediction for fine-grained semi-supervised learning. We evaluated our framework and CFR-TSVM algorithm on the data gathered from real-world experiments. 
Results show that our CFR-TSVM achieved an 81% recognition accuracy over 10 fine-grained activities, which reduces the prediction error of the semi-supervised learning baseline TSVM by half.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"124 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87993941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
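The simplest way to exploit the coarse/fine label hierarchy described above is to mask out fine-activity scores that contradict the coarse prediction; the activity names, hierarchy, and score interface below are hypothetical, and the actual CFR-TSVM couples this constraint into semi-supervised training rather than applying it only at prediction time:

```python
def constrain_fine_scores(fine_scores, coarse_label, hierarchy):
    """Keep only fine-activity scores consistent with the coarse label."""
    allowed = hierarchy[coarse_label]
    return {a: s for a, s in fine_scores.items() if a in allowed}

def predict_fine(fine_scores, coarse_label, hierarchy):
    """Highest-scoring fine activity under the hierarchy constraint."""
    constrained = constrain_fine_scores(fine_scores, coarse_label, hierarchy)
    return max(constrained, key=constrained.get)
```

Even this prediction-time filter removes a whole family of errors: a fine classifier can no longer output "typing" while the coarse model is confident the user is cooking.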
Yuuki Nishiyama, Denzil Ferreira, Wataru Sasaki, T. Okoshi, J. Nakazawa, A. Dey, K. Sezaki
Mobile Crowd Sensing (MCS) is a method for collecting data from multiple sensors on distributed mobile devices to understand social and behavioral phenomena. The method requires collecting sensor data 24/7, ideally inconspicuously to minimize bias. Although several MCS tools for collecting sensor data from off-the-shelf smartphones have been proposed and benchmarked under controlled conditions, reports of their performance under practical sensing-study conditions are scarce, especially on iOS. In this paper, we assess the data collection quality of AWARE iOS, installed on the off-the-shelf iOS smartphones of 9 participants for a week. Our analysis shows that more than 97% of the sensor data provided by hardware sensors (i.e., the accelerometer, location, and pedometer sensors) is successfully collected in real-world conditions, unless a user explicitly quits our data collection application.
{"title":"Using iOS for inconspicuous data collection: a real-world assessment","authors":"Yuuki Nishiyama, Denzil Ferreira, Wataru Sasaki, T. Okoshi, J. Nakazawa, A. Dey, K. Sezaki","doi":"10.1145/3410530.3414369","DOIUrl":"https://doi.org/10.1145/3410530.3414369","url":null,"abstract":"Mobile Crowd Sensing (MCS) is a method for collecting multiple sensor data from distributed mobile devices for understanding social and behavioral phenomena. The method requires collecting the sensor data 24/7, ideally inconspicuously to minimize bias. Although several MCS tools for collecting the sensor data from an off-the-shelf smartphone are proposed and evaluated under controlled conditions as a benchmark, the performance in a practical sensing study condition is scarce, especially on iOS. In this paper, we assess the data collection quality of AWARE iOS, installed on off-the-shelf iOS smartphones with 9 participants for a week. Our analysis shows that more than 97% of sensor data, provided by hardware sensors (i.e., accelerometer, location, and pedometer sensor), is successfully collected in real-world conditions, unless a user explicitly quits our data collection application.","PeriodicalId":7183,"journal":{"name":"Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers","volume":"32 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88395142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}