IDIoT: Multimodal Framework for Ubiquitous Identification and Assignment of Human-carried Wearable Devices

ACM Transactions on Internet of Things · IF 3.5 · Q2 (Computer Science, Information Systems) · Pub Date: 2023-01-12 · DOI: 10.1145/3579832
Adeola Bannis, Shijia Pan, Carlos Ruiz, John Shen, Hae Young Noh, Pei Zhang
{"title":"白痴:人类携带的可穿戴设备的普遍识别和分配的多模态框架","authors":"Adeola Bannis, Shijia Pan, Carlos Ruiz, John Shen, Hae Young Noh, Pei Zhang","doi":"10.1145/3579832","DOIUrl":null,"url":null,"abstract":"IoT (Internet of Things) devices, such as network-enabled wearables, are carried by increasingly more people throughout daily life. Information from multiple devices can be aggregated to gain insights into a person’s behavior or status. For example, an elderly care facility could monitor patients for falls by combining fitness bracelet data with video of the entire class. For this aggregated data to be useful to each person, we need a multi-modality association of the devices’ physical ID (i.e., location, the user holding it, visual appearance) with a virtual ID (e.g., IP address/available services). Existing approaches for multi-modality association often require intentional interaction or direct line-of-sight to the device, which is infeasible for a large number of users or when the device is obscured by clothing. We present IDIoT, a calibration-free passive sensing approach that fuses motion sensor information with camera footage of an area to estimate the body location of motion sensors carried by a user. We characterize results across three baselines to highlight how different fusing methodology results better than earlier IMU-vision fusion algorithms. From this characterization, we determine IDIoT is more robust to errors such as missing frames or miscalibration that frequently occur in IMU-vision matching systems.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":"15 1","pages":"1 - 25"},"PeriodicalIF":3.5000,"publicationDate":"2023-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"IDIoT: Multimodal Framework for Ubiquitous Identification and Assignment of Human-carried Wearable Devices\",\"authors\":\"Adeola Bannis, Shijia Pan, Carlos Ruiz, John Shen, Hae Young Noh, Pei Zhang\",\"doi\":\"10.1145/3579832\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"IoT (Internet of Things) devices, such as network-enabled wearables, are carried by increasingly more people throughout daily life. Information from multiple devices can be aggregated to gain insights into a person’s behavior or status. For example, an elderly care facility could monitor patients for falls by combining fitness bracelet data with video of the entire class. For this aggregated data to be useful to each person, we need a multi-modality association of the devices’ physical ID (i.e., location, the user holding it, visual appearance) with a virtual ID (e.g., IP address/available services). Existing approaches for multi-modality association often require intentional interaction or direct line-of-sight to the device, which is infeasible for a large number of users or when the device is obscured by clothing. We present IDIoT, a calibration-free passive sensing approach that fuses motion sensor information with camera footage of an area to estimate the body location of motion sensors carried by a user. We characterize results across three baselines to highlight how different fusing methodology results better than earlier IMU-vision fusion algorithms. 
From this characterization, we determine IDIoT is more robust to errors such as missing frames or miscalibration that frequently occur in IMU-vision matching systems.\",\"PeriodicalId\":29764,\"journal\":{\"name\":\"ACM Transactions on Internet of Things\",\"volume\":\"15 1\",\"pages\":\"1 - 25\"},\"PeriodicalIF\":3.5000,\"publicationDate\":\"2023-01-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Internet of Things\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3579832\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Internet of Things","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3579832","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 4

Abstract

IoT (Internet of Things) devices, such as network-enabled wearables, are carried by more and more people throughout daily life. Information from multiple devices can be aggregated to gain insights into a person's behavior or status. For example, an elderly care facility could monitor patients for falls by combining fitness bracelet data with video of the entire class. For this aggregated data to be useful to each person, we need a multi-modality association of each device's physical ID (i.e., its location, the user holding it, its visual appearance) with a virtual ID (e.g., IP address/available services). Existing approaches for multi-modality association often require intentional interaction or a direct line of sight to the device, which is infeasible for a large number of users or when the device is obscured by clothing. We present IDIoT, a calibration-free passive sensing approach that fuses motion sensor information with camera footage of an area to estimate the body location of motion sensors carried by a user. We characterize results across three baselines to highlight how a different fusion methodology yields better results than earlier IMU-vision fusion algorithms. From this characterization, we determine that IDIoT is more robust to errors such as missing frames or miscalibration that frequently occur in IMU-vision matching systems.
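The abstract only sketches the high-level idea, so the following is a minimal, hypothetical illustration of IMU-vision matching in Python: each wearable's IMU acceleration trace is compared against the camera-derived motion of every tracked (person, joint) pair, and the device is assigned to the best-matching body location. The correlation-based score, the greedy assignment, and the data layout are assumptions made for illustration, not IDIoT's published algorithm.

```python
# Illustrative sketch of IMU-vision matching: associate each wearable's IMU
# stream with the (person, body location) whose camera-derived motion it best
# matches. Scoring and assignment here are simplifying assumptions.
import numpy as np

def motion_similarity(imu_accel_mag: np.ndarray, joint_speed: np.ndarray) -> float:
    """Correlation between an IMU acceleration-magnitude trace and the speed
    of one tracked body joint, both resampled onto a common time grid."""
    a = imu_accel_mag - imu_accel_mag.mean()
    b = joint_speed - joint_speed.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def assign_devices(imu_traces: dict, joint_traces: dict) -> dict:
    """Greedy assignment of device_id -> (person_id, joint_name).

    imu_traces:   {device_id: acceleration-magnitude array}
    joint_traces: {(person_id, joint_name): joint-speed array from pose estimation}
    """
    assignment = {}
    for dev, accel in imu_traces.items():
        scores = {key: motion_similarity(accel, speed)
                  for key, speed in joint_traces.items()}
        assignment[dev] = max(scores, key=scores.get)
    return assignment
```

A deployable system would additionally need to handle time synchronization, missing camera frames, and one-to-one assignment constraints; the abstract notes that missing frames and miscalibration are exactly the errors that commonly degrade IMU-vision matching systems.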