Sound collection and visualization system enabled participatory and opportunistic sensing approaches

Sunao Hara, M. Abe, N. Sonehara
DOI: 10.1109/PERCOMW.2015.7134069
Published in: 2015 IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops)
Publication date: 2015-03-23
Citations: 2

Abstract

This paper presents a sound collection system to visualize environmental sounds that are collected using a crowdsourcing approach. Analysis of physical features is generally used to characterize sound properties; however, human beings not only analyze sounds but also connect to them emotionally. To visualize sounds according to the characteristics of the listener, we need to collect not only the raw sounds but also the subjective feelings associated with them. For this purpose, we developed a sound collection system that uses a crowdsourcing approach to collect physical sounds, their statistics, and subjective evaluations simultaneously. We then conducted a sound collection experiment with ten participants using the developed system. We collected 6,257 samples of equivalent loudness levels with their locations, and 516 samples of sounds with their locations. Subjective evaluations by the participants are also included in the data. Next, we visualized the sounds on a map: the loudness levels are shown as a color map, and the sounds are shown as icons indicating the sound type. Finally, we conducted a sound discrimination experiment to implement automatic conversion from sounds to appropriate icons. The classifier is trained using the GMM-UBM (Gaussian Mixture Model and Universal Background Model) method. Experimental results show an F-measure of 0.52 and an AUC of 0.79.