Armando Rivero, Gianluca Costante, E. Bonizzoni, A. Puiatti, Anna Förster
The Zigbee low-power communication standard has established itself as one of the most important wireless standards, enabling thousands of industrial and environmental monitoring applications. At the same time, Bluetooth and, more recently, Bluetooth Low Energy have captured the gadget and smartphone markets and currently enable various health and personal applications. The border between these two markets is becoming thinner, and applications would profit significantly from interconnecting the two standards and sharing the information they obtain. We will demonstrate our custom-designed device, BLupZi, which interconnects the worlds of Bluetooth Low Energy and Zigbee. It can be configured to stream all data from one of the networks to the other, or to filter particular packet types or source IDs. We will present two examples with two different types of Zigbee sensor nodes and a smartphone.
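The configurable bridging behavior described above (stream everything, or forward only selected packet types or source IDs) can be pictured with a short sketch. This is an illustrative mock-up only: the packet fields, class names, and forwarding hook are assumptions, not the BLupZi firmware API.

```python
# Illustrative sketch of a filtering bridge between two radio stacks.
# All names here are hypothetical, not the BLupZi implementation.
from dataclasses import dataclass
from typing import Callable, Optional, Set

@dataclass
class Packet:
    source_id: int      # originating node in the Zigbee (or BLE) network
    packet_type: str    # e.g. "temperature", "heartbeat"
    payload: bytes

class Bridge:
    def __init__(self, forward: Callable[[Packet], None],
                 allowed_types: Optional[Set[str]] = None,
                 allowed_sources: Optional[Set[int]] = None):
        # None means "no filter": stream everything to the other network.
        self.forward = forward
        self.allowed_types = allowed_types
        self.allowed_sources = allowed_sources

    def on_receive(self, pkt: Packet) -> None:
        if self.allowed_types is not None and pkt.packet_type not in self.allowed_types:
            return
        if self.allowed_sources is not None and pkt.source_id not in self.allowed_sources:
            return
        self.forward(pkt)   # hand the packet to the other radio stack

# Example: forward only temperature packets from nodes 3 and 7 to the BLE side.
bridge = Bridge(forward=lambda p: print("to BLE:", p),
                allowed_types={"temperature"},
                allowed_sources={3, 7})
bridge.on_receive(Packet(3, "temperature", b"\x01\x42"))
```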
{"title":"Interconnecting zigbee and bluetooth networks with BLupZi","authors":"Armando Rivero, Gianluca Costante, E. Bonizzoni, A. Puiatti, Anna Förster","doi":"10.1145/2668332.2668373","DOIUrl":"https://doi.org/10.1145/2668332.2668373","url":null,"abstract":"The Zigbee low-power communication standard has established itself as one of the most important wireless standards, enabling thousands of industrial and environmental monitoring applications. At the same time, Bluetooth and newly also Bluetooth Low Energy has captured the gadget and smartphone markets and currently enables various health and personal applications. The border between these two markets becomes thinner and applications would profit significantly from interconnecting these two standards and sharing the information obtained. We will demonstrate our custom designed device BLupZi, which interconnects the worlds of Bluetooth Low Energy and Zigbee. It can be configured to stream all data from one of the networks to the other or to filter particular packet types or source IDs. We will present two examples with two different types of Zigbee sensor nodes and a smartphone.","PeriodicalId":223777,"journal":{"name":"Proceedings of the 12th ACM Conference on Embedded Network Sensor Systems","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114447761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
C. Renner, Benjamin Meyer, Daniel Bimschas, Alexander Gabrecht, Sebastian Ebers, T. Tosik, Ammar Amory, E. Maehle, Stefan Fischer
Many underwater monitoring tasks, such as submarine life studies and pipeline inspections, are usually performed manually. Automated underwater monitoring has the potential to increase safety, improve timeliness, and decrease costs. We propose a hybrid solution of stationary sensor buoys and swarms of autonomous underwater vehicles (AUVs) and report on the current progress of its realization. Our solution is based on sensor network technology and a small mobile underwater robot developed in our institute.
{"title":"Hybrid underwater environmental monitoring","authors":"C. Renner, Benjamin Meyer, Daniel Bimschas, Alexander Gabrecht, Sebastian Ebers, T. Tosik, Ammar Amory, E. Maehle, Stefan Fischer","doi":"10.1145/2668332.2668354","DOIUrl":"https://doi.org/10.1145/2668332.2668354","url":null,"abstract":"Many underwater monitoring tasks, such as submarine life studies and pipeline inspections, are usually performed manually. Automated underwater monitoring has the potential to increase safety, improve timeliness, and decrease costs. We propose a hybrid solution of stationary sensor buoys and swarms of autonomous underwater vehicles (AUV) and report on our current progress of its realization. Our solution is based on sensor network technology and a small mobile underwater robot developed in our institute.","PeriodicalId":223777,"journal":{"name":"Proceedings of the 12th ACM Conference on Embedded Network Sensor Systems","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130497559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Q. Yang, Ge Peng, David T. Nguyen, Xin Qi, Gang Zhou, Zdenka Sitova, Paolo Gasti, K. Balagani
Continuous authentication modalities allow a device to authenticate users transparently, without interrupting them or requiring their attention. This is especially important on smartphones, which are more prone to being lost or stolen than regular computers and carry plenty of sensitive information. There is a multitude of signals that can be harnessed for continuous authentication on mobile devices, such as touch input, the accelerometer, and the gyroscope. However, existing public datasets include only a handful of them, limiting the ability to run experiments that involve multiple modalities. To fill this gap, we performed a large-scale user study to collect a wide spectrum of signals on smartphones. Our dataset combines more modalities than existing datasets, including movement, orientation, touch, gestures, and pausality. This dataset has been used to evaluate our new behavioral modality named Hand Movement, Orientation, and Grasp (H-MOG). This poster reports on the data collection process and outcomes, as well as preliminary authentication results.
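Experiments on such a recording typically cut the synchronized streams into fixed-length windows and compute per-modality features before feeding a classifier. The sketch below is a generic illustration of that step under assumed column names and window length; it is not the H-MOG processing pipeline.

```python
# Generic sketch of windowed multimodal feature extraction. Column names and
# the 2-second window are assumptions for illustration, not the H-MOG pipeline.
import numpy as np
import pandas as pd

def window_features(df: pd.DataFrame, window: str = "2s") -> pd.DataFrame:
    """df: time-indexed frame with accelerometer/gyroscope/touch columns."""
    feats = df.resample(window).agg(["mean", "std"])   # per-window statistics
    feats.columns = ["_".join(c) for c in feats.columns]
    return feats.dropna()

# Example with synthetic data standing in for one recording session.
idx = pd.date_range("2014-11-03", periods=1000, freq="10ms")
df = pd.DataFrame({"acc_x": np.random.randn(1000),
                   "gyro_z": np.random.randn(1000),
                   "touch_pressure": np.random.rand(1000)}, index=idx)
print(window_features(df).head())
```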
{"title":"A multimodal data set for evaluating continuous authentication performance in smartphones","authors":"Q. Yang, Ge Peng, David T. Nguyen, Xin Qi, Gang Zhou, Zdenka Sitova, Paolo Gasti, K. Balagani","doi":"10.1145/2668332.2668366","DOIUrl":"https://doi.org/10.1145/2668332.2668366","url":null,"abstract":"Continuous authentication modalities allow a device to authenticate users transparently without interrupting them or requiring their attention. This is especially important on smartphones, which are more prone to be lost or stolen than regular computers, and carry plenty of sensitive information. There is a multitude of signals that can be harnessed for continuous authentication on mobile devices, such as touch input, accelerometer, and gyroscope, etc. However, existing public datasets include only a handful of them, limiting the ability to do experiments that involve multiple modalities. To fill this gap, we performed a large-scale user study to collect a wide spectrum of signals on smartphones. Our dataset combines more modalities than existing datasets, including movement, orientation, touch, gestures, and pausality. This dataset has been used to evaluate our new behavioral modality named Hand Movement, Orientation, and Grasp (H-MOG). This poster reports on the data collection process and outcomes, as well as preliminary authentication results.","PeriodicalId":223777,"journal":{"name":"Proceedings of the 12th ACM Conference on Embedded Network Sensor Systems","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125664806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Valentin Radu, P. Katsikouli, Rik Sarkar, M. Marina
The environmental context of a mobile device determines how it is used and how the device can optimize its operations for greater efficiency and usability. We consider the problem of detecting whether a device is indoors or outdoors. To this end, we present a general method employing semi-supervised machine learning and using only the lightweight sensors on a smartphone. We find that a particular semi-supervised learning method called co-training, when suitably engineered, is most effective. It is able to automatically learn characteristics of new environments and devices, and thereby provides a detection accuracy exceeding 90% even in unfamiliar circumstances. It can learn and adapt online, in real time, at modest computational cost, making the method suitable for on-device learning. Our implementation of the indoor-outdoor detection service is lightweight in energy use: it can sleep when not in use and does not need to track the device state continuously. It is shown to outperform existing indoor-outdoor detection techniques that rely on static algorithms or GPS, in terms of both accuracy and energy efficiency.
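Co-training, the semi-supervised method singled out above, trains two classifiers on different feature "views" of the same samples and lets each one label the unlabeled examples it is most confident about for the benefit of the other. The sketch below is a minimal generic version of that loop; the choice of views (e.g., light/cell features versus other sensors), the classifier, and the confidence threshold are assumptions, not the paper's engineered variant.

```python
# Minimal co-training sketch: two classifiers on different sensor "views"
# teach each other from unlabeled data. Views and thresholds are assumptions.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_train(X1, X2, y, X1_u, X2_u, rounds=5, conf=0.95):
    """X1/X2: labeled views (e.g. light sensor vs. cell/WiFi features),
    y: labels (0 = indoor, 1 = outdoor), X1_u/X2_u: the same unlabeled samples."""
    c1, c2 = GaussianNB(), GaussianNB()
    for _ in range(rounds):
        c1.fit(X1, y)
        c2.fit(X2, y)
        if len(X1_u) == 0:
            break
        p1, p2 = c1.predict_proba(X1_u), c2.predict_proba(X2_u)
        # Each classifier nominates the unlabeled samples it is confident about.
        pick = (p1.max(axis=1) > conf) | (p2.max(axis=1) > conf)
        if not pick.any():
            break
        new_y = np.where(p1.max(axis=1) >= p2.max(axis=1),
                         p1.argmax(axis=1), p2.argmax(axis=1))[pick]
        X1 = np.vstack([X1, X1_u[pick]])
        X2 = np.vstack([X2, X2_u[pick]])
        y = np.concatenate([y, new_y])
        X1_u, X2_u = X1_u[~pick], X2_u[~pick]
    return c1, c2
```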
{"title":"A semi-supervised learning approach for robust indoor-outdoor detection with smartphones","authors":"Valentin Radu, P. Katsikouli, Rik Sarkar, M. Marina","doi":"10.1145/2668332.2668347","DOIUrl":"https://doi.org/10.1145/2668332.2668347","url":null,"abstract":"The environmental context of a mobile device determines how it is used and how the device can optimize operations for greater efficiency and usability. We consider the problem of detecting if a device is indoor or outdoor. Towards this end, we present a general method employing semi-supervised machine learning and using only the lightweight sensors on a smartphone. We find that a particular semi-supervised learning method called co-training, when suitably engineered, is most effective. It is able to automatically learn characteristics of new environments and devices, and thereby provides a detection accuracy exceeding 90% even in unfamiliar circumstances. It can learn and adapt online, in real time, at modest computational costs. Thus the method is suitable for on-device learning. Implementation of the indoor-outdoor detection service based on our method is lightweight in energy use -- it can sleep when not in use and does not need to track the device state continuously. It is shown to outperform existing indoor-outdoor detection techniques that rely on static algorithms or GPS, in terms of both accuracy and energy-efficiency.","PeriodicalId":223777,"journal":{"name":"Proceedings of the 12th ACM Conference on Embedded Network Sensor Systems","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130951716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
L. Mottola, Mattia Moretta, K. Whitehouse, C. Ghezzi
Autonomous drones are a powerful new breed of mobile sensing platform that can greatly extend the capabilities of traditional sensing systems. Unfortunately, it is still non-trivial to coordinate multiple drones to perform a task collaboratively. We present a novel programming model called team-level programming that can express collaborative sensing tasks without exposing the complexity of managing multiple drones, such as concurrent programming, parallel execution, scaling, and failure recovery. We create the Voltron programming system to explore the concept of team-level programming in active sensing applications. Voltron offers programming constructs that create the illusion of a simple sequential execution model while still maximizing opportunities to dynamically re-task the drones as needed. We implement Voltron by targeting a popular aerial drone platform and evaluate the resulting system using a combination of real deployments, user studies, and emulation. Our results indicate that Voltron enables simpler code and produces marginal overhead in terms of CPU, memory, and network utilization. In addition, it greatly facilitates implementing correct and complete collaborative drone applications compared to existing drone programming systems.
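The core of team-level programming is that the application states what should be sensed and where, while the runtime decides which drone does it and runs the work in parallel behind a sequential-looking interface. The toy API below illustrates that idea; all class and method names are hypothetical stand-ins, not Voltron's actual constructs.

```python
# Toy illustration of the team-level idea: the program names locations and
# sensing actions, not individual drones. Names are hypothetical, not Voltron.
from concurrent.futures import ThreadPoolExecutor

class Team:
    def __init__(self, drones):
        self.drones = drones  # the runtime chooses which drone serves each task
        self.pool = ThreadPoolExecutor(max_workers=len(drones))

    def sample(self, locations, sensor):
        """Looks sequential to the caller, executes in parallel on the team."""
        def task(i_loc):
            i, loc = i_loc
            drone = self.drones[i % len(self.drones)]  # naive assignment policy
            return loc, drone.read(loc, sensor)
        return dict(self.pool.map(task, enumerate(locations)))

class FakeDrone:
    def __init__(self, name):
        self.name = name
    def read(self, loc, sensor):
        return f"{self.name} read {sensor} at {loc}"

team = Team([FakeDrone("d1"), FakeDrone("d2")])
print(team.sample([(0, 0), (10, 5), (3, 8)], "camera"))
```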
{"title":"Team-level programming of drone sensor networks","authors":"L. Mottola, Mattia Moretta, K. Whitehouse, C. Ghezzi","doi":"10.1145/2668332.2668353","DOIUrl":"https://doi.org/10.1145/2668332.2668353","url":null,"abstract":"Autonomous drones are a powerful new breed of mobile sensing platform that can greatly extend the capabilities of traditional sensing systems. Unfortunately, it is still non-trivial to coordinate multiple drones to perform a task collaboratively. We present a novel programming model called team-level programming that can express collaborative sensing tasks without exposing the complexity of managing multiple drones, such as concurrent programming, parallel execution, scaling, and failure recovering. We create the Voltron programming system to explore the concept of team-level programming in active sensing applications. Voltron offers programming constructs to create the illusion of a simple sequential execution model while still maximizing opportunities to dynamically re-task the drones as needed. We implement Voltron by targeting a popular aerial drone platform, and evaluate the resulting system using a combination of real deployments, user studies, and emulation. Our results indicate that Voltron enables simpler code and produces marginal overhead in terms of CPU, memory, and network utilization. In addition, it greatly facilitates implementing correct and complete collaborative drone applications, compared to existing drone programming systems.","PeriodicalId":223777,"journal":{"name":"Proceedings of the 12th ACM Conference on Embedded Network Sensor Systems","volume":"217 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133809267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
P. Pannuto, Michael P. Andersen, Tom Bauer, Bradford Campbell, A. Levy, D. Culler, P. Levis, P. Dutta
For the last fifteen years, research has explored the hardware, software, sensing, communication abstractions, languages, and protocols that could make networks of small, embedded devices---motes---sample and report data for long periods of time unattended. Today, the application and technological landscapes have shifted, introducing new requirements and new capabilities. Hardware has evolved past 8- and 16-bit microcontrollers: there are now 32-bit processors with lower energy budgets and greater computing capability. New wireless link layers have emerged, creating protocols that support rapid and efficient setup and teardown but introduce novel limitations that systems must consider. The time has come to look beyond optimizing networks of motes. We look towards new technologies such as Bluetooth Low Energy, Cortex-M processors, and capable energy harvesting, together with new application spaces such as personal area networks and new capabilities and requirements in security and privacy, to inform contemporary hardware and software platforms. It is time for a new, open experimental platform in this post-mote era.
{"title":"A networked embedded system platform for the post-mote era","authors":"P. Pannuto, Michael P. Andersen, Tom Bauer, Bradford Campbell, A. Levy, D. Culler, P. Levis, P. Dutta","doi":"10.1145/2668332.2668364","DOIUrl":"https://doi.org/10.1145/2668332.2668364","url":null,"abstract":"For the last fifteen years, research explored the hardware, software, sensing, communication abstractions, languages, and protocols that could make networks of small, embedded devices---motes---sample and report data for long periods of time unattended. Today, the application and technological landscapes have shifted, introducing new requirements and new capabilities. Hardware has evolved past 8 and 16 bit microcontrollers: there are now 32 bit processors with lower energy budgets and greater computing capability. New wireless link layers have emerged, creating protocols that support rapid and efficient setup and teardown but introduce novel limitations that systems must consider. The time has come to look beyond optimizing networks of motes. We look towards new technologies such as Bluetooth Low Energy, Cortex M processors, and capable energy harvesting, with new application spaces such as personal area networks, and new capabilities and requirements in security and privacy to inform contemporary hardware and software platforms. It is time for a new, open experimental platform in this post-mote era.","PeriodicalId":223777,"journal":{"name":"Proceedings of the 12th ACM Conference on Embedded Network Sensor Systems","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115779920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
K. Sankaran, Minhui Zhu, Xiangfa Guo, A. Ananda, M. Chan, L. Peh
The accelerometer is the predominant sensor used for low-power context detection on smartphones. Although low-power, the accelerometer is orientation- and position-dependent and requires a high sampling rate, and consequently complex processing and training, to achieve good accuracy. We present an alternative approach for context detection using only the smartphone's barometer, a relatively new sensor now present in an increasing number of devices. The barometer is independent of phone position and orientation. Using a low sampling rate of 1 Hz and simple processing based on intuitive logic, we demonstrate that it is possible to use the barometer to detect the basic user activities of IDLE, WALKING, and VEHICLE at extremely low power. We evaluate our approach using 47 hours of real-world transportation traces from 3 countries and 13 individuals, as well as more than 900 km of elevation data pulled from Google Maps for 5 cities, comparing power and accuracy to Google's accelerometer-based Activity Recognition algorithm and to the Future Urban Mobility Survey's (FMS) GPS-accelerometer server-based application. Our barometer-based approach uses 32 mW less power than Google's and has comparable accuracy to both Google and FMS. This is the first paper that uses only the barometer for context detection.
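The "simple processing based on intuitive logic" over 1 Hz pressure samples can be pictured as thresholds on how much the pressure changes over a short history window: vehicles produce fast pressure swings, walking produces small ones, and an idle phone produces almost none. The sketch below illustrates that shape of logic; the window length and threshold values are made-up illustrative numbers, not the ones used in the paper.

```python
# Illustrative threshold-based context detection from 1 Hz barometer samples.
# Window length and thresholds are made-up values, not the paper's parameters.
from collections import deque

class BaroContext:
    def __init__(self, window=30, walk_thresh=0.1, vehicle_thresh=0.5):
        self.samples = deque(maxlen=window)   # last `window` seconds of hPa readings
        self.walk_thresh = walk_thresh        # hPa change over the window
        self.vehicle_thresh = vehicle_thresh

    def update(self, pressure_hpa: float) -> str:
        self.samples.append(pressure_hpa)
        if len(self.samples) < self.samples.maxlen:
            return "IDLE"                     # not enough history yet
        delta = abs(self.samples[-1] - self.samples[0])
        if delta > self.vehicle_thresh:
            return "VEHICLE"                  # fast pressure/elevation change
        if delta > self.walk_thresh:
            return "WALKING"
        return "IDLE"

detector = BaroContext()
for p in [1013.2, 1013.2, 1013.1]:            # one sample per second
    print(detector.update(p))
```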
{"title":"Using mobile phone barometer for low-power transportation context detection","authors":"K. Sankaran, Minhui Zhu, Xiangfa Guo, A. Ananda, M. Chan, L. Peh","doi":"10.1145/2668332.2668343","DOIUrl":"https://doi.org/10.1145/2668332.2668343","url":null,"abstract":"Accelerometer is the predominant sensor used for low-power context detection on smartphones. Although low-power, accelerometer is orientation and position-dependent, requires a high sampling rate, and subsequently complex processing and training to achieve good accuracy. We present an alternative approach for context detection using only the smartphone's barometer, a relatively new sensor now present in an increasing number of devices. The barometer is independent of phone position and orientation. Using a low sampling rate of 1 Hz, and simple processing based on intuitive logic, we demonstrate that it is possible to use the barometer for detecting the basic user activities of IDLE, WALKING, and VEHICLE at extremely low-power. We evaluate our approach using 47 hours of real-world transportation traces from 3 countries and 13 individuals, as well as more than 900 km of elevation data pulled from Google Maps from 5 cities, comparing power and accuracy to Google's accelerometer-based Activity Recognition algorithm, and to Future Urban Mobility Survey's (FMS) GPS-accelerometer server-based application. Our barometer-based approach uses 32 mW lower power compared to Google, and has comparable accuracy to both Google and FMS. This is the first paper that uses only the barometer for context detection.","PeriodicalId":223777,"journal":{"name":"Proceedings of the 12th ACM Conference on Embedded Network Sensor Systems","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117335801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jonathan Fürst, Gabe Fierro, Philippe Bonnet, D. Culler
In this demonstration, we present a novel building control and simulation system focused on integrating the physical and virtual worlds. Actuations and schedules can be manifested either in a physical space or in a virtualization of that space, allowing for more natural interaction with simulations and easier transfer of schedules and configurations from the simulated virtual environment to a real-world deployment. We provide an implementation using a widely used game engine, Unity 3D, and sMAP (the Simple Measurement and Actuation Profile), a time-series database and metadata store.
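The key idea above, that the same actuation command or schedule can be applied either to the real building or to its Unity 3D virtualization, can be sketched as a dispatcher with two interchangeable backends. The class and method names below are assumptions for illustration; they are not the BUSICO 3D code or the sMAP API.

```python
# Sketch of routing one actuation schedule to either the physical building or
# the Unity simulation. Names are illustrative, not the BUSICO 3D / sMAP API.
from abc import ABC, abstractmethod

class ActuationBackend(ABC):
    @abstractmethod
    def set_point(self, actuator: str, value: float) -> None: ...

class PhysicalBackend(ActuationBackend):
    def set_point(self, actuator, value):
        # In the real system this would drive the building's actuation layer.
        print(f"[building] {actuator} <- {value}")

class SimulatedBackend(ActuationBackend):
    def set_point(self, actuator, value):
        # In the real system this would message the Unity 3D scene instead.
        print(f"[unity] {actuator} <- {value}")

def apply_schedule(backend: ActuationBackend, schedule):
    for actuator, value in schedule:
        backend.set_point(actuator, value)

schedule = [("room1/damper", 0.4), ("room1/light", 1.0)]
apply_schedule(SimulatedBackend(), schedule)   # try it in the virtual space
apply_schedule(PhysicalBackend(), schedule)    # then transfer to the building
```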
{"title":"BUSICO 3D: building simulation and control in unity 3D","authors":"Jonathan Fürst, Gabe Fierro, Philippe Bonnet, D. Culler","doi":"10.1145/2668332.2668380","DOIUrl":"https://doi.org/10.1145/2668332.2668380","url":null,"abstract":"In this demonstration, we present a novel system of building control and simulation focused on the integration of the physical and virtual worlds. Actuations and schedules can be manifested either in a physical space or in a virtualization of that space, allowing for more natural interactions with simulations and easier transferring of schedules and configurations from the simulated virtual environment to a real-world deployment. We provide an implementation using a widely used game engine (Unity 3D) and sMAP (Simple Measurement and Actuation Profile), a developed time series database and metadata store.","PeriodicalId":223777,"journal":{"name":"Proceedings of the 12th ACM Conference on Embedded Network Sensor Systems","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123945151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cheng Bo, G. Shen, Jie Liu, Xiangyang Li, Yongguang Zhang, Feng Zhao
The ever-increasing popularity of social networks and the ever-easier photo-taking and sharing experience have led to unprecedented concerns about privacy infringement. The Robot Exclusion Protocol, which regulates web crawlers' behavior according to a per-site robots.txt, together with the cooperative practices of major search service providers, has contributed to a healthy web search industry. Inspired by this, we propose the Privacy Expressing and Respecting Protocol (PERP), which consists of a Privacy.tag, a physical tag that enables a user to explicitly and flexibly express their privacy preferences, and the Privacy Respecting Sharing Protocol (PRSP), a protocol that empowers the photo service provider to enforce privacy protection following users' policy expressions. The goal is to mitigate the public's privacy concerns and ultimately create a healthy photo-sharing ecosystem in the long run. We further design an exemplar Privacy.tag using a customized yet compatible QR code, implement the protocol, and study the technical feasibility of our proposal. Our evaluation results confirm that PERP and PRSP are indeed feasible and incur negligible computation overhead.
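The Privacy.tag idea, a machine-readable physical tag carrying its owner's sharing policy, can be illustrated by packing a small policy record into a standard QR code. The sketch below uses the third-party `qrcode` package and invented policy fields purely for illustration; it does not reproduce the tag format or protocol defined in the paper.

```python
# Illustrative sketch: pack a small sharing policy into a QR code using the
# third-party `qrcode` package (pip install qrcode[pil]). The policy fields
# are assumptions, not the Privacy.tag format specified in the paper.
import json
import qrcode

policy = {
    "user": "anonymous-id-1234",   # pseudonymous owner of the tag
    "allow_share": False,          # photos containing this tag must not be shared
    "allow_tagging": False,
    "expires": "2015-12-31",
}

img = qrcode.make("PRIVACYTAG:" + json.dumps(policy))
img.save("privacy_tag.png")

# A cooperating photo service would decode the tag from an uploaded image and
# apply the embedded policy before publishing, analogous to a crawler
# honoring robots.txt.
```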
{"title":"Privacy.tag: privacy concern expressed and respected","authors":"Cheng Bo, G. Shen, Jie Liu, Xiangyang Li, Yongguang Zhang, Feng Zhao","doi":"10.1145/2668332.2668339","DOIUrl":"https://doi.org/10.1145/2668332.2668339","url":null,"abstract":"The ever increasing popularity of social networks and the ever easier photo taking and sharing experience have led to unprecedented concerns on privacy infringement. Inspired by the fact that the Robot Exclusion Protocol, which regulates web crawlers' behavior according a per-site deployed robots.txt, and cooperative practices of major search service providers, have contributed to a healthy web search industry, in this paper, we propose Privacy Expressing and Respecting Protocol (PERP) that consists of a Privacy.tag -- a physical tag that enables a user to explicitly and flexibly express their privacy deal, and Privacy Respecting Sharing Protocol (PRSP) -- a protocol that empowers the photo service provider to exert privacy protection following users' policy expressions, to mitigate the public's privacy concern, and ultimately create a healthy photo-sharing ecosystem in the long run. We further design an exemplar Privacy.Tag using customized yet compatible QR-code, and implement the Protocol and study the technical feasibility of our proposal. Our evaluation results confirm that PERP and PRSP are indeed feasible and incur negligible computation overhead.","PeriodicalId":223777,"journal":{"name":"Proceedings of the 12th ACM Conference on Embedded Network Sensor Systems","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131322457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shuangjiang Li, Rui Guo, Li He, Wei Gao, H. Qi, Gina P. Owens
In this demo, we present MoodMagician, a pervasive and unobtrusive mobile phone system for inferring human emotions through the recording, processing, and analysis of real-time streaming Galvanic Skin Response (GSR) signals from the human body. Unlike traditional multimodal emotion sensing systems, which rely on data from multiple sensing sources and may hence interfere with people's daily lives, our proposed system detects various categories of human emotions using the GSR signal alone, captured by compact, wearable mobile sensing devices in an unobtrusive fashion. The proposed system has been evaluated with well-designed practical experiments on recognizing human emotions. The recognition accuracy for each emotion reaches up to 70%, enabled by effective preprocessing algorithms and the extraction of representative features from the GSR signals.
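A typical preprocessing-plus-features pipeline for a raw GSR stream, of the kind the abstract alludes to, smooths the signal and then extracts simple statistics and peak counts per window as classifier input. The sketch below is a generic illustration of such a pipeline; the filter, window length, and feature set are assumptions, not the authors' algorithms.

```python
# Generic GSR preprocessing/feature sketch: low-pass filter the raw signal,
# then compute per-window statistics and peak counts. Illustrative only; the
# sampling rate, filter, and features are assumptions, not the authors' code.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def gsr_features(raw, fs=50, window_s=10):
    """raw: 1-D GSR samples in microsiemens; fs: sampling rate in Hz."""
    b, a = butter(2, 1.0 / (fs / 2), btype="low")    # 1 Hz low-pass filter
    smooth = filtfilt(b, a, raw)
    win = fs * window_s
    feats = []
    for start in range(0, len(smooth) - win + 1, win):
        seg = smooth[start:start + win]
        peaks, _ = find_peaks(seg, prominence=0.01)  # skin-conductance responses
        feats.append([seg.mean(), seg.std(), np.ptp(seg), len(peaks)])
    return np.array(feats)   # one row per window: candidate classifier input

print(gsr_features(np.random.rand(50 * 60)).shape)   # one minute of fake data
```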
{"title":"MoodMagician: a pervasive and unobtrusive emotion sensing system using mobile phones for improving human mental health","authors":"Shuangjiang Li, Rui Guo, Li He, Wei Gao, H. Qi, Gina P. Owens","doi":"10.1145/2668332.2668371","DOIUrl":"https://doi.org/10.1145/2668332.2668371","url":null,"abstract":"In this demo, we present MoodMagician, a pervasive and unobtrusive mobile phone system for inferring human emotions through the recording, processing, and analysis of the real-time streaming Galvanic Skin Response (GSR) signal from human bodies. Being different from traditional multimodal emotion sensing systems which rely on data from multiple sensing sources and may hence interfere with people's daily life, our proposed system is able to detect various categories of human emotions using single GSR signal, which is captured by compact and wearable mobile sensing devices in an unobtrusive fashion. The proposed system has been evaluated by well-designed practical experiments to recognize human emotions. The recognition accuracy of each emotion can be up to 70% through the development of effective preprocessing algorithms and the extraction of representative features from the GSR signals.","PeriodicalId":223777,"journal":{"name":"Proceedings of the 12th ACM Conference on Embedded Network Sensor Systems","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125467265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}