Inferring contextual preferences using deep encoder-decoder learners

IF 1.4 | Q3, COMPUTER SCIENCE, INFORMATION SYSTEMS | New Review of Hypermedia and Multimedia | Pub Date: 2018-07-03 | DOI: 10.1080/13614568.2018.1524934
Moshe Unger, Bracha Shapira, L. Rokach, Amit Livne
New Review of Hypermedia and Multimedia, Vol. 24, No. 1, pp. 262 - 290. Journal Article.
Citations: 8

Inferring contextual preferences using deep encoder-decoder learners
ABSTRACT Context-aware systems enable the sensing and analysis of user context in order to provide personalised services. Our study is part of growing research efforts examining how high-dimensional data collected from mobile devices can be utilised to infer users' dynamic preferences that are learned over time. We suggest novel methods for inferring the category of the item liked in a specific contextual situation by applying encoder-decoder learners (long short-term memory networks and autoencoders) to mobile sensor data. In these approaches, the encoder-decoder learners reduce the dimensionality of the contextual features to a latent representation that is learned over time. Given new contextual sensor data from a user, the latent patterns discovered by each deep learner are used to predict the liked item's category in the given context. This can greatly enhance a variety of services, such as mobile online advertising and context-aware recommender systems. We demonstrate our contribution with a point of interest (POI) recommender system in which we label contextual situations with the items' categories. Empirical results utilising a real-world data set of contextual situations derived from mobile phone sensor logs show a significant improvement (up to 73%) in prediction accuracy compared with state-of-the-art classification methods.
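The core idea in the abstract, compressing high-dimensional contextual sensor features into a low-dimensional latent representation with an autoencoder and then predicting a liked item's category from that representation, can be sketched roughly as follows. This is an illustrative NumPy sketch on synthetic data, not the authors' implementation (the paper trains LSTM networks and autoencoders on real mobile sensor logs); all names, dimensions, and hyperparameters here are assumptions for the sake of the example.

```python
import numpy as np

def train_linear_autoencoder(X, d_latent=8, lr=0.01, epochs=500):
    """Learn a latent representation of contextual features by minimising
    mean squared reconstruction error with plain gradient descent
    (a minimal linear autoencoder)."""
    rng = np.random.default_rng(0)
    n, d = X.shape
    W_enc = rng.normal(scale=0.1, size=(d, d_latent))  # encoder weights
    W_dec = rng.normal(scale=0.1, size=(d_latent, d))  # decoder weights
    losses = []
    for _ in range(epochs):
        Z = X @ W_enc        # encode: latent contextual representation
        X_hat = Z @ W_dec    # decode: reconstruct the sensor features
        err = X_hat - X
        losses.append(float((err ** 2).mean()))
        W_dec -= lr * (Z.T @ err) / n               # decoder gradient step
        W_enc -= lr * (X.T @ (err @ W_dec.T)) / n   # encoder gradient step
    return W_enc, losses

# Synthetic stand-in for high-dimensional mobile sensor context vectors.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))
W_enc, losses = train_linear_autoencoder(X)

# The latent codes would feed a downstream classifier that predicts the
# liked item's category (e.g. a POI category) for the given context.
Z = X @ W_enc
print(Z.shape)  # latent representation: (200, 8)
```

After training, reconstruction error should be lower than at initialisation, indicating that the latent code captures structure in the contextual features; in the paper this role is played by deep (nonlinear and sequential) encoders rather than the single linear layer used here.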
Journal
New Review of Hypermedia and Multimedia (COMPUTER SCIENCE, INFORMATION SYSTEMS)
CiteScore: 3.40
Self-citation rate: 0.00%
Articles per year: 4
Review time: >12 weeks
About the journal: The New Review of Hypermedia and Multimedia (NRHM) is an interdisciplinary journal providing a focus for research covering practical and theoretical developments in hypermedia, hypertext, and interactive multimedia.
Latest articles in this journal
Geo-spatial hypertext in virtual reality: mapping and navigating global news event spaces
User-centred collecting for emerging formats
The evolution of the author—authorship and speculative worldbuilding in Johannes Heldén’s Evolution
From “screen-as-writing” theory to Internet culturology. A French perspective on digital textualities
Pasifika arts Aotearoa and Wikipedia