The media façade toolkit: prototyping and simulating interaction with media façades
Sven Gehring, E. Hartz, Markus Löchtefeld, A. Krüger
Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing, 2013-09-08. DOI: 10.1145/2493432.2493471

Digital technologies are rapidly finding their way into urban spaces. One prominent example is media façades. Due to their size, visibility and technical capabilities, they offer great potential for interaction and for becoming the future displays of public spaces. To explore this potential, researchers have recently started to develop interactive applications for various media façades. Existing development tools are mostly tailored to one specific media façade in one specific setting. They usually provide limited means to incorporate user interaction, and the applications developed are limited to running on only one particular media façade. In this paper, we present a flexible, generalized media façade toolkit, which is capable of mimicking arbitrary media façade installations. The toolkit can run interactive applications on media façades with different form factors, sizes and technical capabilities. Furthermore, it ensures application portability between different media façades and offers the possibility of providing interactivity by enabling user input with different modalities and different interaction devices.
Ensembles of multiple sensors for human energy expenditure estimation
H. Gjoreski, Bostjan Kaluza, M. Gams, R. Milić, M. Luštrek
Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing, 2013-09-08. DOI: 10.1145/2493432.2493517

Monitoring human energy expenditure is important in many health and sport applications, since energy expenditure directly reflects the level of physical activity. Actual energy expenditure is impractical to measure; hence, the field aims at estimating it by measuring physical activity with accelerometers and other sensors. Current advanced estimators use a context-dependent approach in which a different regression model is invoked for different activities of the user. In this paper, we go a step further and use multiple contexts corresponding to multiple sensors, resulting in an ensemble of models for energy expenditure estimation. This provides a multi-view perspective, which leads to a better estimation of energy expenditure. The proposed method was experimentally evaluated on a comprehensive set of activities, where it outperformed the current state of the art.
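The context-dependent ensemble idea can be sketched in a few lines: each sensor "view" infers a context and dispatches to a per-context regression model, and the per-view estimates are fused by averaging. This is an illustrative sketch, not the authors' implementation — the feature names, context rules and linear models below are assumptions.

```python
# Hypothetical sketch of a context-dependent ensemble for energy
# expenditure (EE) estimation: each sensor view infers a context
# (e.g. an activity label) and dispatches to a per-context regressor;
# the per-view estimates are then averaged into a single EE value.

def make_view(context_of, regressors):
    """Build one 'view': a context detector plus per-context models."""
    def estimate(features):
        context = context_of(features)          # e.g. "walking"
        return regressors[context](features)    # context-specific model
    return estimate

def ensemble_estimate(views, features):
    """Average the estimates from all sensor views (multi-view fusion)."""
    estimates = [view(features) for view in views]
    return sum(estimates) / len(estimates)

# Toy example: two views with made-up linear models (MET-like values).
accel_view = make_view(
    context_of=lambda f: "walking" if f["accel_var"] > 0.5 else "resting",
    regressors={
        "walking": lambda f: 2.0 + 3.0 * f["accel_var"],
        "resting": lambda f: 1.0,
    },
)
hr_view = make_view(
    context_of=lambda f: "active" if f["heart_rate"] > 100 else "calm",
    regressors={
        "active": lambda f: 0.05 * f["heart_rate"] - 2.0,
        "calm": lambda f: 1.2,
    },
)

ee = ensemble_estimate([accel_view, hr_view],
                       {"accel_var": 0.8, "heart_rate": 120})
print(ee)  # average of the two per-view estimates
```

In the paper the per-context models are trained regressors rather than hand-written lambdas; the fusion-by-averaging step is what gives the multi-view estimate.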
Detecting cooking state with gas sensors during dry cooking
Sen H. Hirano, Jed R. Brubaker, Donald J. Patterson, Gillian R. Hayes
Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing, 2013-09-08. DOI: 10.1145/2493432.2493523

Gas sensors have the potential to assist cooking by providing feedback on the cooking process and by further automating cooking. In this work, we explored the potential use of gas sensors to monitor food during the cooking process. Focusing on dry cooking, we collected gas emissions using 13 sensors during trials in which food was cooked to various degrees of doneness. Using decision tree classifiers, we were able to predict doneness for waffles and popcorn with 73% and 85% accuracy, respectively. We reflect on the potential reasons for this variation and the ways in which gas sensors might reliably be used in ubicomp applications to support cooking.
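A decision tree over gas readings is easy to picture concretely. The hand-written tree below is purely illustrative — the sensor names and thresholds are assumptions, not values from the paper, where a tree would be learned from the 13-sensor training data.

```python
# Illustrative sketch (not the authors' trained model): a tiny decision
# tree that predicts doneness from two hypothetical gas-sensor readings.
# A real system would fit the tree (e.g. CART) to labeled cooking trials.

def predict_doneness(co_ppm, voc_ppm):
    """Hand-written tree with assumed thresholds, for illustration only."""
    if voc_ppm < 40.0:            # few volatiles released yet
        return "underdone"
    if co_ppm < 15.0:             # volatiles present, little combustion gas
        return "done"
    return "burnt"                # both signals high

print(predict_doneness(co_ppm=5.0, voc_ppm=20.0))   # prints "underdone"
```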
Pursuits: spontaneous interaction with displays based on smooth pursuit eye movement and moving targets
Mélodie Vidal, A. Bulling, Hans-Werner Gellersen
Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing, 2013-09-08. DOI: 10.1145/2493432.2493477

Although gaze is an attractive modality for pervasive interactions, the real-world implementation of eye-based interfaces poses significant challenges, such as calibration. We present Pursuits, an innovative interaction technique that enables truly spontaneous interaction with eye-based interfaces. A user can simply walk up to the screen and readily interact with moving targets. Instead of being based on gaze location, Pursuits correlates eye pursuit movements with objects moving dynamically on the interface. We evaluate the influence of target speed, number and trajectory, and develop guidelines for designing Pursuits-based interfaces. We then describe six realistic usage scenarios and implement three of them to evaluate the method in a usability study and a field study. Our results show that Pursuits is a versatile and robust technique and that users can interact with Pursuits-based interfaces without prior knowledge or a preparation phase.
UniPad: orchestrating collaborative activities through shared tablets and an integrated wall display
S. Kreitmayer, Y. Rogers, R. Laney, S. Peake
Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing, 2013-09-08. DOI: 10.1145/2493432.2493506

UniPad is a face-to-face digital simulation for use in classroom settings that runs on shared tablets and a wall display. The goal is to encourage students to talk, collaborate and make decisions together in real time, by switching between working on shared 'small group' devices and a 'whole classroom' public display, instead of working by themselves using their own devices. It is intended to improve peer discussion and teacher involvement by focusing and constraining shared attention at different stages of an activity. The domain for this study is finance management. The system was designed using an iterative, participatory design method with expert finance educators and then trialed in an in-the-wild study at a school. The findings show how the set-up helped facilitate verbal participation in the classroom. We discuss how lightweight, multi-device shared technology systems, such as UniPad, can be designed and used for a range of classroom activities.
Find my stuff: supporting physical objects search with relative positioning
Jens Nickels, Pascal Knierim, Bastian Könings, F. Schaub, Björn Wiedersheim, S. Musiol, M. Weber
Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing, 2013-09-08. DOI: 10.1145/2493432.2493447

Searching for misplaced keys, phones, or wallets is a common nuisance. Find My Stuff (FiMS) provides search support for physical objects inside furniture, at room level, and in multiple locations, e.g., home and office. Stuff tags make objects searchable, while all other localization components are integrated into furniture. FiMS requires minimal configuration and automatically adapts to the user's furniture arrangement. Object search is supported with relative position cues, such as "phone is inside top drawer" or "the wallet is between couch and table," which do not require exact object localization. Functional evaluation of our prototype shows the approach's practicality, with sufficient accuracy in realistic environments and low energy consumption. We also conducted two user studies, which showed that objects can be retrieved significantly faster with FiMS than with manual search and that our relative position cues provide better support than map-based cues. Combined with audiovisual feedback, FiMS also outperforms spotlight-based cues.
Power harvesting from microwave oven electromagnetic leakage
Y. Kawahara, Xiaoying Bian, Ryo Shigeta, R. Vyas, M. Tentzeris, T. Asami
Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing, 2013-09-08. DOI: 10.1145/2493432.2493500

In this paper, we consider the possibility of harvesting electricity from the microwave field that leaks from commercial microwave ovens. Our experimental results showed that the leakage received by a dipole antenna was about 0 dBm (1 mW) at a point 5 cm in front of the door. A rectenna consisting of a dipole antenna and a charge pump can convert the leaked microwave energy into DC current. When a microwave oven was operated for 2 min, 9.98 mJ of energy was harvested. We demonstrated that this energy is sufficient for powering a digital cooking timer to count down for 3 min and beep for 2.5 s. The operation of other kitchen devices was also demonstrated.
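The abstract's numbers can be sanity-checked with basic RF arithmetic: 0 dBm equals 1 mW, and energy is power integrated over the 2-minute heating cycle. The efficiency figure below is back-computed from the reported 9.98 mJ, not stated in the paper.

```python
# Sanity-check of the harvesting numbers in the abstract.
# dBm is a logarithmic power unit referenced to 1 mW: P_mW = 10^(dBm/10).

def dbm_to_mw(dbm):
    return 10 ** (dbm / 10)

incident_mw = dbm_to_mw(0)               # 1.0 mW at 5 cm from the door
duration_s = 2 * 60                      # one 2-minute microwave run
incident_mj = incident_mw * duration_s   # mW * s = mJ -> 120 mJ available
efficiency = 9.98 / incident_mj          # harvested / incident, back-computed
print(incident_mj, round(efficiency, 3))  # 120.0 0.083
```

So the rectenna captured roughly 8% of the energy incident on the antenna over the cycle — plausible for a simple dipole-plus-charge-pump chain, and enough to run the cooking timer described.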
An unsupervised learning approach to social circles detection in ego bluetooth proximity network
Jiangchuan Zheng, L. Ni
Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing, 2013-09-08. DOI: 10.1145/2493432.2493512

Understanding a user's social interactions in the physical world proves important in building context-aware ubiquitous applications. A good way towards that objective is to categorize the people to whom a user is socially related into what we call social circles. In this note, we propose a novel unsupervised approach that learns from Bluetooth (BT) sensor data recording one's dynamic proximity relations with others to identify her social circles, each of which is formed along a semantically coherent aspect. For each circle we learn its members as well as the temporal dimensions along which it is formed. Our method is innovative in that it overcomes data sparsity through information sharing, and allows for overlapping circles, which are common in reality. Experiments on real data demonstrate the effectiveness of our method, and also show the potential of relational mobile data for sensing personal behaviors beyond personal data.
Dog's life: wearable activity recognition for dogs
C. Ladha, Nils Y. Hammerla, Emma Hughes, P. Olivier, T. Plötz
Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing, 2013-09-08. DOI: 10.1145/2493432.2493519

The health and well-being of dogs, whether domesticated pets or service animals, are major concerns that are taken seriously for ethical, emotional, and financial reasons. Welfare assessments in dogs rely on objective observations of both the frequency and the variability of individual behaviour traits, which are often difficult to obtain in a dog's everyday life. In this paper we identify a set of activities that are linked to behaviour traits relevant to a dog's wellbeing. We developed a collar-worn accelerometry platform that records dog behaviours in naturalistic environments. A statistical classification framework is used for recognising dog activities. In an experimental evaluation we analysed the naturalistic behaviour of 18 dogs and were able to recognise a total of 17 different activities with approximately 70% classification accuracy. The presented system is the first of its kind that allows for robust and detailed analysis of dog activities in naturalistic environments.
The timestreams platform: artist mediated participatory sensing for environmental discourse
Jesse M. Blum, Martin Flintham, Rachel Jacobs, Victoria Shipp, Genovefa Kefalidou, Michael A. Brown, Derek McAuley
Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing, 2013-09-08. DOI: 10.1145/2493432.2493479

Ubiquitous and pervasive computing techniques have been used to inform discourses around climate change and energy insecurity, traditionally through data capture and representation for scientists, policy makers and the public. Research into re-engaging the public with sustainability and climate change issues reveals the significance of emotional and personal engagement, alongside locally meaningful, globally relevant and data-informed climate messaging for the public. New ubiquitous and pervasive computing techniques are emerging to support the next generation of climate change stakeholders, including artists, community practitioners, educators and data hackers, in creating artworks and performances that respond to scientific data. Grounded in our experiences of community-based artistic interventions, we explore the design and deployments of the Timestreams platform, demonstrating uses of ubiquitous and pervasive computing within these new forms of discourse around climate change and energy insecurity.