"Influencing health-related behaviour with wearable cameras: strategies & ethical considerations" (doi:10.1145/2526667.2526677)
A. Doherty, Wilby Williamson, M. Hillsdon, Steve Hodges, C. Foster, P. Kelly
BACKGROUND: The growing global burden of noncommunicable diseases makes it important to monitor and influence a range of health-related behaviours such as diet and physical activity. Wearable cameras appear to record and reveal many of these behaviours in more accessible ways. However, even once opportunities for improvement have been identified, most health-related interventions fail to result in lasting change.
AIM: To assess the use of wearable cameras as part of a behaviour change strategy and to consider the ethical implications of their use.
METHODS: We examine relevant principles from behavioural science theory and consider the ways in which images enhance or change the processes which underpin behaviour change. We propose ways for researchers to instigate the use of, and engagement with, these images to produce more effective and longer-lasting behaviour change interventions. We also consider the ethical implications of using digital life-logging technologies in these ways, and discuss the potential harms and benefits of such approaches for both the wearer and those they meet.
DISCUSSION: Future behaviour change strategies based on self-monitoring could consider the use of wearable cameras. It is important that such work considers the ethical implications of this research and adheres to accepted guidelines and principles.
{"title":"Influencing health-related behaviour with wearable cameras: strategies & ethical considerations","authors":"A. Doherty, Wilby Williamson, M. Hillsdon, Steve Hodges, C. Foster, P. Kelly","doi":"10.1145/2526667.2526677","DOIUrl":"https://doi.org/10.1145/2526667.2526677","url":null,"abstract":"BACKGROUND: The growing global burden of noncommunicable diseases makes it important to monitor and influence a range of health-related behaviours such as diet and physical activity Wearable cameras appear to record and reveal many of these behaviours in more accessible ways. However, having determined opportunities for improvement, most health-related interventions fail to result in lasting changes.\u0000 AIM: To assess the use of wearable cameras as part of a behaviour change strategy and consider ethical implications of their use.\u0000 METHODS: We examine relevant principles from behavioural science theory and consider the way images enhance or change the processes which underpin behaviour change. We propose ways for researchers to instigate the use of and engagement with these images to lead to more effective and long-lasting behaviour change interventions. We also consider the ethical implications of using digital life-logging technologies in these ways. We discuss the potential harms and benefits of such approaches for both the wearer and those they meet.\u0000 DISCUSSION: Future behaviour change strategies based on self-monitoring could consider the use of wearable cameras. It is important that such work considers the ethical implications of this research and adheres to accepted guidelines and principles.","PeriodicalId":124821,"journal":{"name":"International SenseCam & Pervasive Imaging Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129838026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Exploring the technical challenges of large-scale lifelogging" (doi:10.1145/2526667.2526678)
C. Gurrin, A. Smeaton, Zhengwei Qiu, A. Doherty
Ambiently and automatically maintaining a lifelog may help individuals track their lifestyle, learning, health and productivity. In this paper we motivate and discuss the technical challenges of developing real-world lifelogging solutions, based on seven years of experience. We discuss the gathering, organisation, retrieval and presentation challenges of large-scale lifelogging, show how each can be addressed, and outline the benefits that may accrue.
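A common first step for the organisation challenge mentioned above is segmenting the continuous photo stream into discrete events. The following is a minimal sketch of that idea, assuming timestamped images and a simple time-gap rule; the 5-minute threshold and the gap-based segmentation are illustrative assumptions, not the authors' published method.

```python
from datetime import datetime, timedelta

# Minimal sketch: split a lifelog photo stream into "events" wherever the gap
# between consecutive capture timestamps exceeds a threshold. The 5-minute
# threshold is an illustrative assumption, not the authors' method.
def segment_events(timestamps, gap=timedelta(minutes=5)):
    """Group sorted capture times into events separated by gaps larger than `gap`."""
    events, current = [], [timestamps[0]]
    for prev, curr in zip(timestamps, timestamps[1:]):
        if curr - prev > gap:      # a long pause ends the current event
            events.append(current)
            current = []
        current.append(curr)
    events.append(current)
    return events

# Toy usage: 10 photos every 30s in the morning, then 5 more around noon.
times = [datetime(2013, 11, 18, 9, 0) + timedelta(seconds=30 * i) for i in range(10)]
times += [datetime(2013, 11, 18, 12, 0) + timedelta(seconds=30 * i) for i in range(5)]
print([len(e) for e in segment_events(times)])  # -> [10, 5]
```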
{"title":"Exploring the technical challenges of large-scale lifelogging","authors":"C. Gurrin, A. Smeaton, Zhengwei Qiu, A. Doherty","doi":"10.1145/2526667.2526678","DOIUrl":"https://doi.org/10.1145/2526667.2526678","url":null,"abstract":"Ambiently and automatically maintaining a lifelog is an activity that may help individuals track their lifestyle, learning, health and productivity. In this paper we motivate and discuss the technical challenges of developing real-world lifelogging solutions, based on seven years of experience. The gathering, organisation, retrieval and presentation challenges of large-scale lifelogging are discussed and we show how this can be achieved and the benefits that may accrue.","PeriodicalId":124821,"journal":{"name":"International SenseCam & Pervasive Imaging Conference","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114747555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Comparison of passive versus active photo capture of built environment features by technology naïve Latinos using the SenseCam and Stanford healthy neighborhood discovery tool" (doi:10.1145/2526667.2526669)
Jylana L. Sheats, S. Winter, Priscilla Padilla-Romero, Lisa Goldman-Rosas, Lauren A. Grieco, A. King
Assessments designed to measure features of the built environment are challenging and have traditionally been conducted by trained researchers. The purpose of this study was to explore and compare the feasibility and utility of having community residents use two different technological devices to assess their neighborhood built environment features: the Stanford Healthy Neighborhood Discovery Tool (which allows users to deliberately take photographs) and the SenseCam (which automatically takes photographs). Consented participants were low-income, tech-naïve Latino adolescents aged 11 to 14 years (n=8) and older adults aged 63 to 80 years (n=7) from North Fair Oaks, California. Participants used the devices while on a "usual" 45- to 60-minute walk through their neighborhood. Photos from each device were reviewed, coded, categorized into themes, and compared. Perceptual data regarding the use of the SenseCam were available for 15 participants, and SenseCam photographs were available for 7 participants. There were 1,678 photos automatically captured by the SenseCam compared to 112 photos taken by participants with the Discovery Tool. Of the original 1,678 SenseCam photos, there were 68 in which researchers coded built environment features that were not captured by the community residents using the Discovery Tool. Forty-two (62%) of these photos were of positive features and 26 (38%) were of negative features. The SenseCam captured a greater number of images with positive features that were not captured by adolescents via the Discovery Tool, as well as a greater number of negative features not captured by the older adults via the Discovery Tool. Two environmental elements (graffiti, dogs) were captured by the Discovery Tool but not the SenseCam. Overall, study participants were receptive to both devices and indicated that they would be interested in using them again for a longer period of time. Older adults reported more positive perceptions of the SenseCam than adolescents. While the sample was small, study results indicate that the SenseCam may be useful in capturing built environment features that affect physical activity but that community residents do not notice, perhaps because they are habituated to certain conditions in their neighborhoods. The results suggest that this type of habituation may have different valences (positive or negative) for different age groups. Given the impact the built environment has on physical activity, particularly in low-income communities, further research regarding the use of the SenseCam to passively gather built environment data in tech-naïve populations is warranted.
{"title":"Comparison of passive versus active photo capture of built environment features by technology naïve Latinos using the SenseCam and Stanford healthy neighborhood discovery tool","authors":"Jylana L. Sheats, S. Winter, Priscilla Padilla-Romero, Lisa Goldman-Rosas, Lauren A. Grieco, A. King","doi":"10.1145/2526667.2526669","DOIUrl":"https://doi.org/10.1145/2526667.2526669","url":null,"abstract":"Assessments designed to measure features of the built environment are challenging and have traditionally been conducted by trained researchers. The purpose of this study was to explore and compare both the feasibility and utility of having community residents use two different technological devices to assess their neighborhood built environment features: the Stanford Healthy Neighborhood Discovery Tool (which allows users to thoughtfully take photographs) and the SenseCam (which automatically takes photographs). Consented participants were low income, tech-naïve, Latino adolescents aged 11 to 14 years (n=8), and older adults aged 63 to 80 years (n=7) from North Fair Oaks, California. Participants used the devices while on a \"usual\" 45 to 60 minute walk through their neighborhood. Photos from each device were reviewed, coded, categorized into themes, and compared. Perceptual data regarding the use of the SenseCam were available for 15 participants and SenseCam photographs were available for 7 participants. There were 1,678 photos automatically captured by the SenseCam compared to 112 photos taken by participants with the Discovery Tool. Of the original 1,678 SenseCam photos there were 68 in which researchers coded built environment features that were not captured by the community residents using the Discovery Tool. Forty-two (62%) of these photos were of positive features; and 26 (38%) were of negative features. The SenseCam captured a greater number of images with positive features that were not captured by adolescents via the Discovery Tool; as well as a greater number of negative features not captured by the older adults via the Discovery Tool. There were two environmental elements (graffiti, dogs) captured by the Discovery Tool though not the SenseCam. Overall, study participants were receptive to both devices and indicated that they would be interested in using them again for a longer period of time. Older adults reported more positive perceptions about the SenseCam than adolescents. While the sample was small, study results indicate that the SenseCam may be useful in capturing built environment features that affect physical activity but that community residents don't notice, perhaps because they are habituated to certain conditions in their neighborhoods. The results suggest that this type of habituation may have different valences (positive or negative) for different age groups. 
Given the impact the built environment has on physical activity, particularly in low-income communities, further research regarding the use of the SenseCam to passively gather built environment data in tech-naïve populations is warranted.","PeriodicalId":124821,"journal":{"name":"International SenseCam & Pervasive Imaging Conference","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133563049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Line image signature for scene understanding with a wearable vision system" (doi:10.1145/2526667.2526670)
A. Rituerto, A. C. Murillo, J. J. Guerrero
Wearable computer vision systems provide plenty of opportunities to develop human assistive devices. This work contributes to visual scene understanding techniques using a helmet-mounted omnidirectional vision system. The goal is to extract semantic information about the environment, such as the type of environment being traversed or the basic 3D layout of the place, in order to build assistive navigation systems. We propose a novel line-based global image descriptor that encodes the structure of the observed scene. The descriptor is designed with omnidirectional imagery in mind, where observed lines are longer than in conventional images. Our experiments show that the proposed descriptor can be used for indoor scene recognition, with results comparable to state-of-the-art global descriptors. In addition, we demonstrate advantages of particular interest for wearable vision systems: higher robustness to rotation, compactness, and easier integration with other scene understanding steps.
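To make the notion of a line-based global signature concrete, here is a rough stand-in written with OpenCV: a length-weighted histogram of line-segment orientations detected by the probabilistic Hough transform. This is not the paper's descriptor (which is designed for omnidirectional imagery); it only sketches the general recipe of summarizing a scene by its line structure.

```python
import cv2
import numpy as np

def line_orientation_signature(gray, bins=16):
    """Crude line-based global descriptor: a length-weighted histogram of
    line-segment orientations. Illustrative only, not the paper's method."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=5)
    hist = np.zeros(bins)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = np.arctan2(y2 - y1, x2 - x1) % np.pi      # orientation in [0, pi)
            length = np.hypot(x2 - x1, y2 - y1)
            hist[int(angle / np.pi * bins) % bins] += length  # weight by segment length
    return hist / (hist.sum() + 1e-9)  # normalise so scenes are comparable

# Usage: compare two scenes by histogram distance for recognition, e.g.
# d = np.linalg.norm(line_orientation_signature(a) - line_orientation_signature(b))
```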
{"title":"Line image signature for scene understanding with a wearable vision system","authors":"A. Rituerto, A. C. Murillo, J. J. Guerrero","doi":"10.1145/2526667.2526670","DOIUrl":"https://doi.org/10.1145/2526667.2526670","url":null,"abstract":"Wearable computer vision systems provide plenty of opportunities to develop human assistive devices. This work contributes on visual scene understanding techniques using a helmet-mounted omnidirectional vision system. The goal is to extract semantic information of the environment, such as the type of environment being traversed or the basic 3D layout of the place, to build assistive navigation systems. We propose a novel line-based image global descriptor that encloses the structure of the scene observed. This descriptor is designed with omnidirectional imagery in mind, where observed lines are longer than in conventional images. Our experiments show that the proposed descriptor can be used for indoor scene recognition comparing its results to state-of-the-art global descriptors. Besides, we demonstrate additional advantages of particular interest for wearable vision systems: higher robustness to rotation, compactness, and easier integration with other scene understanding steps.","PeriodicalId":124821,"journal":{"name":"International SenseCam & Pervasive Imaging Conference","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131186447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Experiencing SenseCam: a case study interview exploring seven years living with a wearable camera" (doi:10.1145/2526667.2526676)
Niamh Caprani, N. O’Connor, C. Gurrin
This paper presents findings from an interview with CG, an individual who has worn an automated camera, the SenseCam, every day for the past seven years. Of interest to the study were the participant's day-to-day experiences of wearing the camera and whether these had changed since he first wore it. The findings outline the effect that wearing the camera has on his self-identity, relationships and interactions with people in public. Issues relating to the capture, transfer and retrieval of lifelog images are also identified. These experiences inform us about the long-term effects of digital life capture and how lifelogging could progress in the future.
{"title":"Experiencing SenseCam: a case study interview exploring seven years living with a wearable camera","authors":"Niamh Caprani, N. O’Connor, C. Gurrin","doi":"10.1145/2526667.2526676","DOIUrl":"https://doi.org/10.1145/2526667.2526676","url":null,"abstract":"This paper presents the findings from an interview with CG, an individual who has worn an automated camera, the SenseCam, every day for the past seven years. Of interest to the study were the participant's day-to-day experiences wearing the camera and whether these had changed since first wearing the camera. The findings presented outline the effect that wearing the camera has on his self-identity, relationships and interactions with people in the public. Issues relating to data capture, transfer and retrieval of lifelog images are also identified. These experiences inform us of the long-term effects of digital life capture and how lifelogging could progress in the future.","PeriodicalId":124821,"journal":{"name":"International SenseCam & Pervasive Imaging Conference","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126771065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Physical activity recognition in free-living from body-worn sensors" (doi:10.1145/2526667.2526685)
Katherine Ellis, S. Godbole, Jacqueline Chen, S. Marshall, Gert R. G. Lanckriet, J. Kerr
Machine learning techniques are used to improve accelerometer-based measures of physical activity. Most studies have used laboratory-collected data to develop algorithms to classify behaviors, but studies of free-living activity are needed to improve the ecological validity of these methods. With this aim, we collected a novel free-living dataset that uses SenseCams to obtain ground-truth annotations of physical activities. We trained a classifier on free-living data and compared it to a classifier trained on prescribed activities. The classifier predicts five activity classes: bicycling, riding in a vehicle, sitting, standing, and walking/running. When testing on free-living data, classifiers trained on free-living data significantly outperform those trained on a controlled dataset (89.2% vs. 70.9% accuracy).
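As a sketch of the general pipeline, the snippet below summarises each accelerometer window with simple statistics and trains a random forest. The features, the model choice, and the synthetic data are assumptions for illustration, not the authors' exact method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

LABELS = ["bicycling", "vehicle", "sitting", "standing", "walking/running"]

def window_features(acc):
    """Summarise one (n_samples, 3) accelerometer window with basic statistics."""
    mag = np.linalg.norm(acc, axis=1)                 # per-sample magnitude
    return np.hstack([acc.mean(0), acc.std(0), [mag.mean(), mag.std()]])

# Synthetic stand-ins for accelerometer windows and SenseCam-derived labels.
rng = np.random.default_rng(0)
X = np.array([window_features(rng.normal(size=(300, 3))) for _ in range(500)])
y = rng.integers(0, len(LABELS), size=500)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy: %.3f" % cross_val_score(clf, X, y, cv=5).mean())
```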
{"title":"Physical activity recognition in free-living from body-worn sensors","authors":"Katherine Ellis, S. Godbole, Jacqueline Chen, S. Marshall, Gert R. G. Lanckriet, J. Kerr","doi":"10.1145/2526667.2526685","DOIUrl":"https://doi.org/10.1145/2526667.2526685","url":null,"abstract":"Machine learning techniques are used to improve accelerometer-based measures of physical activity. Most studies have used laboratory-collected data to develop algorithms to classify behaviors, but studies of free-living activity are needed to improve the ecological validity of these methods. With this aim, we collected a novel free-living dataset that uses SenseCams to obtain ground-truth annotations of physical activities. We trained a classifier on free-living data and compare it to a classifier trained on prescribed activities. The classifier predicts five activity classes: bicycling, riding in a vehicle, sitting, standing, and walking/running. When testing on free-living data, classifiers trained on free-living data significantly outperform those trained on a controlled dataset (89.2% vs. 70.9% accuracy).","PeriodicalId":124821,"journal":{"name":"International SenseCam & Pervasive Imaging Conference","volume":"48 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113942053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Do you see what I see: crowdsource annotation of captured scenes" (doi:10.1145/2526667.2526671)
J. Hipp, Deepti Adlakha, R. Gernes, A. Kargol, Robert Pless
The Archive of Many Outdoor Scenes has captured 400 million images. Many of these cameras point at street intersections, a subset of which has undergone built environment improvements during the past seven years. We identified six cameras in Washington, DC, and uploaded 120 images from each, taken before a built environment change (2007) and after (2010), to the crowdsourcing website Amazon Mechanical Turk (n=1,440). Five unique MTurk workers annotated each image, counting the number of pedestrians, cyclists, and vehicles; two trained research assistants completed the same tasks. Reliability and validity statistics for the MTurk workers revealed substantial agreement in annotating captured images of pedestrians and vehicles. Averaging the annotations of four MTurk workers proved the most parsimonious route to valid results. Crowdsourcing was shown to be a reliable and valid workforce for annotating images of outdoor human behavior.
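A hypothetical illustration of the aggregation step follows: per image, take the mean count over the first k workers and compare it with "gold" counts. The counts are invented, and Pearson correlation is just one plausible validity statistic; the study's actual reliability analysis may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
gold = rng.poisson(4, size=120)   # invented research-assistant pedestrian counts
# Five workers whose counts equal the gold count plus small integer noise.
workers = np.maximum(0, gold + rng.integers(-1, 2, size=(5, 120)))

for k in (1, 4, 5):               # how many workers to average per image
    mean_annotation = workers[:k].mean(axis=0)
    r = np.corrcoef(mean_annotation, gold)[0, 1]
    print(f"mean of {k} worker(s): Pearson r = {r:.3f}")
```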
"The use of the Sensecam to explore daily functioning of older adults with chronic pain" (doi:10.1145/2526667.2526679)
G. Wilson, Derek Jones, P. Schofield, Denis Martin
Chronic pain often interferes with daily living. This study aimed to explore the day-to-day patterns of functioning and experiences of older adults living with chronic pain. Thirteen older adults (65+ years) living with chronic pain (pain lasting >3 months) took part in the study. Four data collection techniques were used to gather information on various aspects of daily living: participants were asked to wear a SenseCam and a LifeShirt, to complete a daily diary for seven days, and to take part in a semi-structured interview. Themes were developed, based on the images, to explain the effect of chronic pain on the participants' functioning. The SenseCam allowed novel data to be gathered, increasing knowledge of the daily functioning of older adults living with chronic pain.
{"title":"The use of the Sensecam to explore daily functioning of older adults with chronic pain","authors":"G. Wilson, Derek Jones, P. Schofield, Denis Martin","doi":"10.1145/2526667.2526679","DOIUrl":"https://doi.org/10.1145/2526667.2526679","url":null,"abstract":"Chronic pain often interferes with daily living. This study aimed to explore day-to-day patterns of functioning and experiences of older adults living with chronic pain. Thirteen older adults (65+ years) living with chronic pain (pain lasting >3 months) took part in the study. Four data collection techniques were used to gather information on various aspects of daily living. Participants were asked to wear a Sensecam, a LifeShirt, as well as complete a daily diary for seven days. Participants also took part in a semi-structured interview. Themes were developed, based on the images, to explain the effect of chronic pain on the participants' functioning. The Sensecam allowed novel data to be gathered increasing knowledge of the daily functioning of older adults living with chronic pain.","PeriodicalId":124821,"journal":{"name":"International SenseCam & Pervasive Imaging Conference","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122049823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Feasibility of identifying eating moments from first-person images leveraging human computation" (doi:10.1145/2526667.2526672)
Edison Thomaz, Aman Parnami, Irfan Essa, G. Abowd
There is widespread agreement in the medical research community that more effective mechanisms for dietary assessment and food journaling are needed to fight back against obesity and other nutrition-related diseases. However, it is presently not possible to automatically capture and objectively assess an individual's eating behavior. Currently used dietary assessment and journaling approaches have several limitations; they pose a significant burden on individuals and are often not detailed or accurate enough. In this paper, we describe an approach where we leverage human computation to identify eating moments in first-person point-of-view images taken with wearable cameras. Recognizing eating moments is a key first step both in terms of automating dietary assessment and building systems that help individuals reflect on their diet. In a feasibility study with 5 participants over 3 days, where 17,575 images were collected in total, our method was able to recognize eating moments with 89.68% accuracy.
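One plausible way to aggregate the human-computation labels is a per-image majority vote over several annotators. The sketch below uses invented labels and per-annotator error rates, since the paper's exact aggregation pipeline is not described in this abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
truth = rng.random(1000) < 0.1            # ~10% of images are eating moments
correct = rng.random((3, 1000)) < 0.9     # each of 3 annotators is right 90% of the time
votes = np.where(correct, truth, ~truth)  # an incorrect annotator flips the label
majority = votes.sum(axis=0) >= 2         # majority vote over the 3 annotators
print("majority-vote accuracy: %.1f%%" % (100 * (majority == truth).mean()))
```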
{"title":"Feasibility of identifying eating moments from first-person images leveraging human computation","authors":"Edison Thomaz, Aman Parnami, Irfan Essa, G. Abowd","doi":"10.1145/2526667.2526672","DOIUrl":"https://doi.org/10.1145/2526667.2526672","url":null,"abstract":"There is widespread agreement in the medical research community that more effective mechanisms for dietary assessment and food journaling are needed to fight back against obesity and other nutrition-related diseases. However, it is presently not possible to automatically capture and objectively assess an individual's eating behavior. Currently used dietary assessment and journaling approaches have several limitations; they pose a significant burden on individuals and are often not detailed or accurate enough. In this paper, we describe an approach where we leverage human computation to identify eating moments in first-person point-of-view images taken with wearable cameras. Recognizing eating moments is a key first step both in terms of automating dietary assessment and building systems that help individuals reflect on their diet. In a feasibility study with 5 participants over 3 days, where 17,575 images were collected in total, our method was able to recognize eating moments with 89.68% accuracy.","PeriodicalId":124821,"journal":{"name":"International SenseCam & Pervasive Imaging Conference","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115830613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Measuring time spent outdoors using a wearable camera and GPS" (doi:10.1145/2526667.2526668)
Michael S. Lam, S. Godbole, Jacqueline Chen, M. Oliver, H. Badland, S. Marshall, P. Kelly, C. Foster, A. Doherty, J. Kerr
Numerous studies have demonstrated multiple health benefits of being outside and of exposure to natural environments. Accurately measuring the amount of time individuals spend outdoors is therefore essential for assessing the impact of outdoor time on health. SenseCam is a wearable camera that automatically captures images; the annotated images provide an objective criterion for determining the amount of time spent outdoors. In this paper we explored the use of SenseCam and Global Positioning System (GPS) devices to calculate time spent outdoors, using the annotated SenseCam images to investigate the GPS threshold that best differentiates outdoor from indoor time. We analyzed the signal strength data recorded by the GPS with a Receiver Operating Characteristic (ROC) curve as well as a three-category logistic regression model. The ROC curve yielded 79.4% sensitivity for indoor time and 84.1% specificity for outdoor time, with an area under the curve of 0.927.
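The threshold analysis can be sketched with synthetic signal-strength values: sweep candidate thresholds via an ROC curve and pick the one maximising Youden's J (sensitivity + specificity - 1). The distributions below are assumptions for illustration, not the study's GPS data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)
outdoor = np.r_[np.zeros(500), np.ones(500)]                 # 0 = indoor, 1 = outdoor
snr = np.r_[rng.normal(20, 6, 500), rng.normal(35, 6, 500)]  # signal strength per image

fpr, tpr, thresholds = roc_curve(outdoor, snr)
best = np.argmax(tpr - fpr)                                  # maximise Youden's J
print("AUC = %.3f" % roc_auc_score(outdoor, snr))
print("best threshold = %.1f (sensitivity %.1f%%, specificity %.1f%%)"
      % (thresholds[best], 100 * tpr[best], 100 * (1 - fpr[best])))
```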
{"title":"Measuring time spent outdoors using a wearable camera and GPS","authors":"Michael S. Lam, S. Godbole, Jacqueline Chen, M. Oliver, H. Badland, S. Marshall, P. Kelly, C. Foster, A. Doherty, J. Kerr","doi":"10.1145/2526667.2526668","DOIUrl":"https://doi.org/10.1145/2526667.2526668","url":null,"abstract":"Numerous studies have demonstrated multiple health benefits of being outside and exposure to natural environments. It is essential to accurately measure the amount of time individuals spend outdoors to assess the impact of exposure to outdoor time on health. SenseCam is a wearable camera that automatically captures images. The annotated images provide an objective criterion for determining amount of time spent outdoors. In this paper we explored the use of SenseCam and Global Positioning System (GPS) devices to calculate time spent outdoors. We used the annotated SenseCam images to investigate the optimal threshold from the GPS data to best differentiate outdoor and indoor time. We analyzed the signal strength data recorded by the GPS with a Receiver Operating Characteristic (ROC) curve as well as a three-category logistic regression model. The ROC curve resulted in 79.4% sensitivity for indoor time and 84.1% specificity for outdoor time with an area under the curve of 0.927.","PeriodicalId":124821,"journal":{"name":"International SenseCam & Pervasive Imaging Conference","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124915441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}