
Latest publications from WearSys '16

MiHub: Wearable Management for IoT
Pub Date : 2016-06-30 DOI: 10.1145/2935643.2935646
Kirill Varshavskiy, A. Harris, R. Kravets
The availability of Internet of Things (IoT)-enabled devices has resulted in a rapid increase in the number of devices a user carries with them. To support the collection of data from the set of a user's devices as well as to interact with the environment around them, we present MiHub, a system architecture that dynamically manages limited resources and redundant services based on the resource availability and the dynamically changing set of a user's personal devices. For any given set of co-located user devices, a MiHub is elected, which then configures the set of services on the devices in its proximity. To provide more reliable data access, MiHub is designed around the use of more expensive primary data sources, supported by low-cost secondary services that can quickly take over upon failure of the primary source. We evaluate the basic components of MiHub, highlighting its energy-efficient mechanisms to ensure IoT-enabled service availability in the face of dynamically changing groups of devices in the proximity of the user.
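The abstract's core mechanism, electing a hub among co-located devices and backing an expensive primary data source with a cheap standby, can be sketched as follows. This is an illustrative reconstruction, not the paper's algorithm: the `Device` fields, the battery-based election heuristic, and the cost-based source selection are all assumptions.

```python
# Hypothetical sketch of MiHub-style hub election and primary/secondary
# source assignment; the scoring heuristics here are assumptions.
from dataclasses import dataclass, field

@dataclass
class Device:
    name: str
    battery: float                                 # remaining battery, 0.0-1.0
    services: dict = field(default_factory=dict)   # service name -> energy cost

def elect_hub(devices):
    """Pick the co-located device best able to coordinate (here: most battery)."""
    return max(devices, key=lambda d: d.battery)

def assign_sources(devices, service):
    """Choose an expensive primary plus a cheap standby for one service."""
    providers = [d for d in devices if service in d.services]
    providers.sort(key=lambda d: d.services[service], reverse=True)
    primary = providers[0]
    secondary = providers[-1] if len(providers) > 1 else None
    return primary, secondary

devices = [
    Device("watch", 0.40, {"heart_rate": 2.0}),
    Device("phone", 0.90, {"heart_rate": 5.0, "location": 8.0}),
    Device("band",  0.70, {"heart_rate": 1.0}),
]
hub = elect_hub(devices)                              # phone: most battery
primary, secondary = assign_sources(devices, "heart_rate")
# on failure of `primary`, the hub switches reads over to `secondary`
```

In this toy run the phone is elected hub and serves as the expensive primary heart-rate source, with the low-cost band as the standby that takes over on failure.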
Citations: 2
Wireless Smartphone Keyboard for Visually Challenged Users
Pub Date : 2016-06-30 DOI: 10.1145/2935643.2935648
Cecil D'silva, Vickram Parthasarathy, Sethuraman N. Rao
A Smartphone is a combination of a cell phone, a personal digital assistant (PDA), a media player, a GPS navigation unit and much more. To a blind and visually impaired (BVI) user, a Smartphone can assist and help connect with our modern day cyber social world. The only inconvenience to the BVI user is its user interface, which is a touch screen display. In this project, an economical user-friendly compact text-input device for a BVI Smartphone user is developed. This device is designed for daily use and it supports Braille text. The device is connected to the smart phone by Bluetooth interface. A Smartphone app pairs the phone with the device and switches the default keyboard service to use the device keyboard. The Smartphone operating system runs the screen reader when the keyboard is in use, giving real time audio feedback to the user. Thus, a setup for nearly seamless text input is provided to the BVI user. A low-cost Arduino based prototype has been developed as a proof of concept. As part of the future work, the prototype keyboard will be evaluated by BVI users and the results will be compared with other existing text input methods for BVI users.
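At its core, the Braille text support described above amounts to decoding chords of the six Braille dot keys into characters before sending them over Bluetooth. A minimal sketch of that decoding step; the Grade 1 Braille mapping for letters a–j is standard, while the function names and key-handling interface are assumptions:

```python
# Six-dot Braille chord decoding, as a device like the one described might
# perform before transmitting a character. Mapping covers Grade 1 letters a-j.

BRAILLE_TO_CHAR = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
    frozenset({1, 2, 4}): "f",
    frozenset({1, 2, 4, 5}): "g",
    frozenset({1, 2, 5}): "h",
    frozenset({2, 4}): "i",
    frozenset({2, 4, 5}): "j",
}

def decode_chord(pressed_dots):
    """Translate a set of simultaneously pressed dot keys into a character."""
    return BRAILLE_TO_CHAR.get(frozenset(pressed_dots), "?")

# e.g. pressing dots 1 and 5 together yields "e"
assert decode_chord({1, 5}) == "e"
```

The screen reader mentioned in the abstract would then voice each decoded character as audio feedback.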
Citations: 6
DeepSense: A GPU-based Deep Convolutional Neural Network Framework on Commodity Mobile Devices
Pub Date : 2016-06-30 DOI: 10.1145/2935643.2935650
Huynh Nguyen Loc, R. Balan, Youngki Lee
Recently, a branch of machine learning algorithms called deep learning has gained huge attention for boosting the accuracy of a variety of sensing applications. However, executing deep learning algorithms such as convolutional neural networks on mobile processors is non-trivial due to their intensive computational requirements. In this paper, we present our early design of DeepSense, a mobile GPU-based deep convolutional neural network (CNN) framework. For its design, we first explored the differences between server-class and mobile-class GPUs, and studied the effectiveness of various optimization strategies such as branch divergence elimination and memory vectorization. Our results show that DeepSense is able to execute a variety of CNN models for image recognition, object detection and face recognition in soft real time with no or marginal accuracy tradeoffs. Experiments also show that our framework is scalable across multiple devices with different GPU architectures (e.g. Adreno and Mali).
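One optimization strategy named in the abstract, branch divergence elimination, can be illustrated conceptually in NumPy (the framework itself runs mobile GPU kernels; this is only a sketch of the idea). On a GPU, threads in a warp that take different branches execute serially, so rewriting a per-element conditional such as ReLU into branch-free arithmetic avoids the stall:

```python
# Branch divergence elimination, illustrated on a ReLU activation.
import numpy as np

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])

# Branchy ReLU: a per-element if/else, which diverges inside a GPU warp.
relu_branchy = np.array([v if v > 0 else 0.0 for v in x])

# Branch-free ReLU: identical result from pure arithmetic; every thread
# executes the same instructions regardless of the data.
relu_branchless = x * (x > 0)

assert np.allclose(relu_branchy, relu_branchless)
```

Memory vectorization, the other strategy mentioned, is analogous: loads are reshaped so each thread fetches wide, aligned chunks instead of scattered scalars.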
Citations: 60
CryptoCoP: Lightweight, Energy-efficient Encryption and Privacy for Wearable Devices
Pub Date : 2016-06-30 DOI: 10.1145/2935643.2935647
Robin Snader, R. Kravets, A. Harris
As people use and interact with more and more wearables and IoT-enabled devices, their private information is being exposed without any privacy protections. However, the limited capabilities of IoT devices make implementing robust privacy protections challenging. In response, we present CryptoCoP, an energy-efficient, content-agnostic privacy and encryption protocol for IoT devices. Eavesdroppers cannot snoop on data protected by CryptoCoP or track users via their IoT devices. We evaluate CryptoCoP and show that the performance and energy overheads are viable in a wide variety of situations, and can be modified to trade off forward secrecy and energy consumption against required key storage on the device.
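The forward-secrecy versus key-storage trade-off mentioned at the end of the abstract is commonly realized with a key ratchet: each message key is derived from the previous one through a one-way function, so compromising the current key cannot reveal past traffic. A generic sketch of that idea, not CryptoCoP's actual construction:

```python
# Hash-ratchet sketch of forward secrecy: advancing the key one-way per
# message means old keys are unrecoverable from the current one.
import hashlib

def ratchet(key: bytes) -> bytes:
    """Advance the key one step with a one-way function (SHA-256 here)."""
    return hashlib.sha256(b"ratchet" + key).digest()

key = b"\x00" * 32       # placeholder shared session key
keys = []
for _ in range(3):       # derive one key per message
    keys.append(key)
    key = ratchet(key)

# every message key differs, and earlier keys cannot be recomputed
# from later ones without inverting the hash
assert len(set(keys)) == 3
```

The trade-off the abstract describes follows directly: ratcheting more often buys finer-grained forward secrecy at the cost of extra computation, while caching pre-derived keys saves energy but enlarges the key material stored on the device.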
Citations: 24
In-ear Biosignal Recording System: A Wearable For Automatic Whole-night Sleep Staging
Pub Date : 2016-06-30 DOI: 10.1145/2935643.2935649
Anh Nguyen, Raghda Alqurashi, Zohreh Raghebi, F. Kashani, A. Halbower, Thang N. Dinh, Tam N. Vu
In this work, we present a low-cost, lightweight wearable sensing system that can monitor bioelectrical signals generated by electrically active tissues across the brain, the eyes, and the facial muscles from inside human ears. Our work presents two key aspects of the sensing, which include the construction of electrodes and the extraction of these biosignals using a supervised non-negative matrix factorization learning algorithm. To illustrate the usefulness of the system, we developed an autonomous sleep staging system using the output of our proposed in-ear sensing system. We prototyped the device and evaluated its sleep stage classification performance on 8 participants for a period of 1 month. With 94% accuracy on average, the evaluation results show that our wearable sensing system is promising to monitor brain, eyes, and facial muscle signals with reasonable fidelity from human ear canals.
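The signal-extraction step above relies on supervised non-negative matrix factorization (NMF). A plain, unsupervised NMF with multiplicative updates conveys the core idea of factoring a non-negative signal matrix into parts; the supervised variant the paper uses additionally incorporates label information, which this sketch omits:

```python
# Plain NMF via multiplicative updates: V (m x n) ~= W (m x r) @ H (r x n),
# with all factors kept non-negative. Illustrative only; not the paper's
# supervised variant.
import numpy as np

def nmf(V, rank, iters=200, eps=1e-9):
    """Factor a non-negative matrix V into non-negative W and H."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        # multiplicative updates preserve non-negativity by construction
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# stand-in for a non-negative biosignal feature matrix (e.g. a spectrogram)
V = np.abs(np.random.default_rng(1).random((20, 50)))
W, H = nmf(V, rank=4)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
assert W.min() >= 0 and H.min() >= 0
```

In the sensing context, the columns of `W` would correspond to learned source components (brain, eye, muscle activity) and `H` to their activations over time.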
Citations: 21