
WISMM '14: Latest Publications

A Secure P2P Architecture for Video Distribution
Pub Date : 2014-11-07 DOI: 10.1145/2661714.2661731
F. A. López-Fuentes, Carlos Alberto Orta-Cruz
Currently, video demand has increased significantly, as has the number of sites that provide this type of service. There is also a wide range of devices from which video requests are made. For these reasons, video providers use different video coding techniques that adapt the video both to variable network conditions and to the heterogeneity of devices. The vast majority of video applications are based on the client-server model, which makes system maintenance very expensive. An alternative to the client-server model is the P2P (peer-to-peer) network, which has attractive features for video broadcasting, such as scalability and low deployment cost. However, a major limitation on the use of P2P infrastructures for content distribution is security, because most sites do not consider authentication methods or content protection. This paper proposes a P2P architecture for video distribution that uses scalable video coding techniques together with security strategies such as encryption and authentication.
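One common building block of such security strategies is per-chunk authentication, so that peers can verify a video chunk's integrity before playing or forwarding it. The sketch below is illustrative only (the paper does not specify this scheme): it uses stdlib HMAC-SHA256 with a hypothetical shared key, e.g. one distributed by a tracker.

```python
import hashlib
import hmac
import os

def tag_chunk(key: bytes, chunk: bytes) -> bytes:
    """Compute an HMAC-SHA256 authentication tag over one video chunk."""
    return hmac.new(key, chunk, hashlib.sha256).digest()

def verify_chunk(key: bytes, chunk: bytes, tag: bytes) -> bool:
    """Constant-time check that a received chunk matches its tag."""
    return hmac.compare_digest(tag_chunk(key, chunk), tag)

key = os.urandom(32)      # hypothetical shared key between authenticated peers
chunk = b"\x00" * 1024    # stand-in for one encoded video chunk
tag = tag_chunk(key, chunk)

assert verify_chunk(key, chunk, tag)                 # intact chunk accepted
assert not verify_chunk(key, chunk + b"x", tag)      # tampered chunk rejected
```

A real system would layer this on top of chunk encryption and peer authentication; HMAC alone only guarantees integrity among holders of the shared key.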
Cited by: 2
Monitoring of User Generated Video Broadcasting Services
Pub Date : 2014-11-07 DOI: 10.1145/2661714.2661726
Denny Stohr, Stefan Wilk, W. Effelsberg
Mobile video broadcasting services offer users the opportunity to instantly share content from their mobile handhelds with a large audience over the Internet. However, data caps in cellular network contracts and limited upload capabilities restrict the adoption of mobile video broadcasting services. Additionally, the quality of these video streams is often reduced by the recording users' lack of skill and the technical limitations of the capturing devices. Our research focuses on large-scale events that attract dozens of users recording video in parallel. In many cases, the available network infrastructure is not capable of uploading all video streams in parallel. To decide how to appropriately transmit these video streams, suitable monitoring of the video generation process is required. For this scenario, a measurement framework is proposed that allows Internet-scale mobile broadcasting services to deliver samples in an optimized way. Our framework architecture analyzes three zones for effectively monitoring user-generated video. Besides classical Quality of Service metrics on the network state, video quality indicators and additional auxiliary sensor information are gathered. The aim of this framework is an efficient coordination of devices and their uploads based on the currently observed system state.
Cited by: 4
Bridging the User Intention Gap: an Intelligent and Interactive Multidimensional Music Search Engine
Pub Date : 2014-11-07 DOI: 10.1145/2661714.2661720
Shenggao Zhu, Jingli Cai, Jiangang Zhang, Zhonghua Li, Ju-Chiang Wang, Ye Wang
Music is inherently abstract and multidimensional. However, existing music search engines are usually inconvenient or too complicated for users to create multidimensional music queries, leading to an intention gap between users' music information needs and the input queries. In this paper, we present a novel content-based music search engine, the Intelligent & Interactive Multidimensional mUsic Search Engine (i2MUSE), which enables users to input music queries with multiple dimensions efficiently and effectively. Six musical dimensions are explored in this study: tempo, beat strength, genre, mood, instrument, and vocal. Users can begin a query from any dimension and interact with the system to organize the query. Once the parameters of some dimensions have been set, i2MUSE intelligently highlights suggested parameters and grays out un-suggested ones in every other dimension, helping users express their musical intentions and avoid parameter conflicts in the query. In addition, i2MUSE provides a real-time display of the percentage of matched tracks in the database. Users can also set the relative weight of each specified dimension. We have conducted a pilot user study with 30 subjects and validated the effectiveness and usability of i2MUSE.
Cited by: 3
Towards Storytelling by Extracting Social Information from OSN Photo's Metadata
Pub Date : 2014-11-07 DOI: 10.1145/2661714.2661721
M. Saini, Fatimah Al-Zamzami, Abdulmotaleb El Saddik
The popularity of online social networks (OSNs) is growing rapidly over time. People share their experiences with friends and relatives through multimedia such as images, videos, and text, and the amount of such shared multimedia is growing likewise. The large amount of multimedia data on OSNs contains a snapshot of a user's life, and this social network data can be crawled to build stories about individuals. However, the information needed for a story, such as events and pictures, is not fully available on a user's own profile. While part of this information can be retrieved from the user's own timeline, a large amount of event and multimedia information is only available on friends' profiles. As the number of friends can be very large, in this work we focus on identifying a subset of friends for enriching the story data. In this paper we explore social relationships from a multimedia perspective and propose a framework to build stories using information from multiple profiles. To the best of our knowledge, this is the first work on building stories using multiple OSN profiles. The experimental results show that the proposed method yields more information (events, locations, and photos) about individuals than traditional methods that rely on the user's own profile alone.
Cited by: 6
Automatic Video Intro and Outro Detection on Internet Television
Pub Date : 2014-11-07 DOI: 10.1145/2661714.2661729
Maryam Nematollahi, Xiao-Ping Zhang
Content Delivery Networks aim to deliver multimedia content to end-users with high reliability and speed. However, transmission costs are very high due to the large volume of video data. To deliver bandwidth-intensive video data cost-effectively, content providers have become interested in detecting redundant content that is most probably not of interest to the user and in providing options to stop its delivery. In this work, we target the intro and outro (IO) segments of a video, which are traditionally duplicated in all episodes of a TV show; most viewers fast-forward to skip them and watch only the main story. Using computationally efficient features such as silence gaps, blank-screen transitions, and a histogram of shot boundaries, we develop a framework that identifies the intro and outro parts of a show. We test the proposed intro/outro detection methods on a large number of videos. Performance analysis shows that our algorithm successfully delineates intro and outro transitions with detection rates of 82% and 76%, respectively, and an average error of less than 2.06 seconds.
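To make the "silence gaps" feature concrete: a minimal sketch (not the authors' implementation) scans an audio amplitude sequence and reports runs of samples whose magnitude stays below a threshold, since such runs often coincide with intro/outro boundaries. The threshold and minimum run length are illustrative parameters.

```python
def silence_gaps(samples, threshold=0.02, min_len=5):
    """Return (start, end) index ranges where |amplitude| stays below
    `threshold` for at least `min_len` consecutive samples."""
    gaps, start = [], None
    for i, s in enumerate(samples):
        if abs(s) < threshold:
            if start is None:
                start = i          # a quiet run begins here
        else:
            if start is not None and i - start >= min_len:
                gaps.append((start, i))
            start = None
    # handle a quiet run that extends to the end of the signal
    if start is not None and len(samples) - start >= min_len:
        gaps.append((start, len(samples)))
    return gaps

# Loud audio, then 8 silent samples, then loud audio again:
signal = [0.5] * 10 + [0.0] * 8 + [0.6] * 10
print(silence_gaps(signal))  # [(10, 18)]
```

In practice one would run this on frame-level energy (e.g. RMS per 20 ms window) rather than raw samples, and align the detected gaps with shot-boundary features.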
Cited by: 1
Student Performance Evaluation of Multimodal Learning via a Vector Space Model
Pub Date : 2014-11-07 DOI: 10.1145/2661714.2661723
Subhasree Basu, Yi Yu, Roger Zimmermann
Multimodal learning, as an effective method for helping students understand complex concepts, has attracted much research interest recently. Our motivation for this work is very intuitive: we want to evaluate student performance in multimodal learning over the Internet. We are developing a system for student performance evaluation that can automatically collect student-generated multimedia data during online multimodal learning and analyze student performance. As an initial step, we propose to use a vector space model to process student-generated multimodal data, aiming to evaluate student performance by exploring all annotation information. In particular, the area of a study material is represented as a 2-dimensional grid, and predefined attributes form an attribute space. Annotations generated by students are then mapped to a 3-dimensional indicator matrix: two dimensions correspond to object positions in the grid of the study material, and a third dimension records the attributes of objects. Recall, precision, and the Jaccard index are then used as metrics to evaluate student performance, with the teacher's analysis as the ground truth. We applied our scheme to real datasets generated by students and teachers in two schools. The results are encouraging and confirm the effectiveness of the proposed approach to student performance evaluation in multimodal learning.
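The three metrics reduce to set comparisons between a student's annotations and the teacher's ground truth. A minimal sketch, treating each annotation as a hypothetical (row, column, attribute) triple from the indicator matrix (the exact encoding in the paper may differ):

```python
def evaluate(student: set, teacher: set):
    """Precision, recall, and Jaccard index of a student's annotation set
    against the teacher's ground-truth annotation set."""
    inter = student & teacher
    union = student | teacher
    precision = len(inter) / len(student) if student else 0.0
    recall = len(inter) / len(teacher) if teacher else 0.0
    jaccard = len(inter) / len(union) if union else 0.0
    return precision, recall, jaccard

# Illustrative annotations: (grid row, grid column, attribute)
teacher = {(0, 0, "color"), (0, 1, "shape"), (1, 1, "color"), (2, 2, "label")}
student = {(0, 0, "color"), (0, 1, "shape"), (1, 0, "color")}

p, r, j = evaluate(student, teacher)
# 2 annotations match: precision = 2/3, recall = 2/4, Jaccard = 2/5
```

Precision penalizes spurious student annotations, recall penalizes missed ones, and the Jaccard index summarizes overall agreement in a single number.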
Cited by: 2
Empirical Observation of User Activities: Check-ins, Venue Photos and Tips in Foursquare
Pub Date : 2014-11-07 DOI: 10.1145/2661714.2661724
Yi Yu, Suhua Tang, Roger Zimmermann, K. Aizawa
Location-based social networking platforms (e.g., Foursquare), as a popular form of participatory sensing system that collects heterogeneous information about venues (such as tips and photos) from users, have attracted much attention recently. In this paper, we study the distribution of this information and its relationships, based on a large dataset crawled from Foursquare that consists of 2,728,411 photos, 1,212,136 tips and 148,924,749 check-ins at 190,649 venues, contributed by 508,467 users. We analyze the distribution of user-generated check-ins, venue photos and venue tips, and show interesting category patterns and correlations among this information. In addition, we make the following observations: i) Venue photos in Foursquare significantly make venues more social and popular. ii) Users share venue photos highly related to the food category. iii) The category dynamics of venue photo sharing follow patterns similar to those of venue tips and user check-ins at the venues. iv) Users tend to share photos rather than tips. We distribute our data and source code upon request for research purposes (email: yi.yu.yy@gmail.com).
Cited by: 9
Genre-based Analysis of Social Media Data on Music Listening Behavior: Are Fans of Classical Music Really Averse to Social Media?
Pub Date : 2014-11-07 DOI: 10.1145/2661714.2661717
M. Schedl, M. Tkalcic
It is frequently presumed that lovers of Classical music are not present in social media. In this paper, we investigate whether this statement can be empirically verified. To this end, we compare two social media platforms --- Last.fm and Twitter --- and perform a study on the musical preferences of their respective users. We investigate two research hypotheses: (i) fans of Classical music are more reluctant than listeners of other genres to use social media to indicate their listening habits, and (ii) there are correlations between the use of Last.fm and Twitter to indicate music listening behavior. Both hypotheses are verified, and substantial differences could be made out for Twitter users. The results of these investigations will help improve music recommendation systems for listeners with non-mainstream music tastes.
Cited by: 14
The Influence of Audio Quality on the Popularity of Music Videos: A YouTube Case Study
Pub Date : 2014-11-07 DOI: 10.1145/2661714.2661725
Michael Schoeffler, J. Herre
Video-sharing websites like YouTube contain many music videos. On such websites, the audio quality of these music videos can range from poor to very good, since the content is uploaded by users. The results of a previous study indicated that music videos are in general very popular among users. This paper addresses the question of whether the audio quality of music videos has an influence on user ratings. A generic system for measuring audio quality on video-sharing websites is described. The system has been implemented and deployed to evaluate the relationship between audio quality and video ratings on YouTube. The analysis of the results indicates that, contrary to popular expectation, the audio quality of music videos has surprisingly little influence on its appreciation by YouTube users.
Cited by: 4
#nowplaying Music Dataset: Extracting Listening Behavior from Twitter
Pub Date : 2014-11-07 DOI: 10.1145/2661714.2661719
Eva Zangerle, M. Pichl, W. Gassler, Günther Specht
The extraction of information from online social networks has become popular in both industry and academia, as these data sources allow for innovative applications. However, in the areas of music recommender systems and music information retrieval, such data is hardly exploited. In this paper, we present the #nowplaying dataset, which leverages social media to create a diverse and constantly updated dataset describing the music listening behavior of users. For the creation of the dataset, we rely on Twitter, which users frequently use to post which music they are currently listening to. From such tweets, we extract track and artist information and further metadata. The dataset currently comprises 49 million listening events, 144,011 artists, 1,346,203 tracks, and 4,150,615 users, which makes it considerably larger than existing datasets.
Cited by: 85
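A minimal sketch of the kind of tweet parsing that dataset creation like this involves. The "Track - Artist" tweet format and the `extract_listening_event` helper are assumptions for illustration, not the paper's actual extraction pipeline:

```python
# Hypothetical sketch: pulling track and artist names out of "#nowplaying"
# tweets. Real tweets are far messier; this only handles one common shape.
import re

PATTERN = re.compile(
    r"#nowplaying\s+(?P<track>.+?)\s+-\s+(?P<artist>.+?)(?:\s+http\S+)?$",
    re.IGNORECASE,
)

def extract_listening_event(tweet: str):
    """Return (track, artist) if the tweet looks like a listening event, else None."""
    m = PATTERN.search(tweet.strip())
    if not m:
        return None
    return m.group("track"), m.group("artist")

print(extract_listening_event(
    "#nowplaying Bohemian Rhapsody - Queen http://spoti.fi/xyz"))
# → ('Bohemian Rhapsody', 'Queen')
```

In practice, matched strings would still need to be resolved against a music metadata service to yield the canonical track and artist identifiers the dataset stores.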