The space between the images

L. Guibas
{"title":"The space between the images","authors":"L. Guibas","doi":"10.1145/2502081.2509857","DOIUrl":null,"url":null,"abstract":"Multimedia content has become a ubiquitous presence on all our computing devices, spanning the gamut from live content captured by device sensors such as smartphone cameras to immense databases of images, audio and video stored in the cloud. As we try to maximize the utility and value of all these petabytes of content, we often do so by analyzing each piece of data individually and foregoing a deeper analysis of the relationships between the media. Yet with more and more data, there will be more and more connections and correlations, because the data captured comes from the same or similar objects, or because of particular repetitions, symmetries or other relations and self-relations that the data sources satisfy. This is particularly true for media of a geometric character, such as GPS traces, images, videos, 3D scans, 3D models, etc. In this talk we focus on the \"space between the images\", that is on expressing the relationships between different mutlimedia data items. We aim to make such relationships explicit, tangible, first-class objects that themselves can be analyzed, stored, and queried -- irrespective of the media they originate from. We discuss mathematical and algorithmic issues on how to represent and compute relationships or mappings between media data sets at multiple levels of detail. We also show how to analyze and leverage networks of maps and relationships, small and large, between inter-related data. The network can act as a regularizer, allowing us to to benefit from the \"wisdom of the collection\" in performing operations on individual data sets or in map inference between them. 
We will illustrate these ideas using examples from the realm of 2D images and 3D scans/shapes -- but these notions are more generally applicable to the analysis of videos, graphs, acoustic data, biological data such as microarrays, homeworks in MOOCs, etc. This is an overview of joint work with multiple collaborators, as will be discussed in the talk.","PeriodicalId":20448,"journal":{"name":"Proceedings of the 21st ACM international conference on Multimedia","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2013-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 21st ACM international conference on Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2502081.2509857","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Multimedia content has become a ubiquitous presence on all our computing devices, spanning the gamut from live content captured by device sensors such as smartphone cameras to immense databases of images, audio, and video stored in the cloud. As we try to maximize the utility and value of all these petabytes of content, we often do so by analyzing each piece of data individually, foregoing a deeper analysis of the relationships between the media. Yet with more and more data there will be more and more connections and correlations, because the data captured comes from the same or similar objects, or because of particular repetitions, symmetries, or other relations and self-relations that the data sources satisfy. This is particularly true for media of a geometric character, such as GPS traces, images, videos, 3D scans, and 3D models. In this talk we focus on the "space between the images", that is, on expressing the relationships between different multimedia data items. We aim to make such relationships explicit, tangible, first-class objects that can themselves be analyzed, stored, and queried -- irrespective of the media they originate from. We discuss mathematical and algorithmic issues in representing and computing relationships or mappings between media data sets at multiple levels of detail. We also show how to analyze and leverage networks of maps and relationships, small and large, between inter-related data. The network can act as a regularizer, allowing us to benefit from the "wisdom of the collection" when performing operations on individual data sets or inferring maps between them. We will illustrate these ideas using examples from the realm of 2D images and 3D scans/shapes, but these notions apply more generally to the analysis of videos, graphs, acoustic data, biological data such as microarrays, homework submissions in MOOCs, and more. This is an overview of joint work with multiple collaborators, as will be discussed in the talk.
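The abstract's idea of a map network acting as a regularizer can be made concrete with a toy sketch. The example below is a hypothetical illustration (not from the talk itself): maps between three small point sets are represented as permutation matrices, and composing the maps around a cycle A → B → C → A should approximate the identity. The deviation from the identity is a cycle-consistency error, the kind of signal a network of maps can use to flag or correct bad pairwise matches.

```python
import numpy as np

def perm(p):
    """Permutation matrix that sends index j to p[j] (column j has a 1 at row p[j])."""
    m = np.zeros((len(p), len(p)))
    m[p, np.arange(len(p))] = 1.0
    return m

# Hypothetical pairwise maps between three datasets A, B, C.
M_ab = perm([1, 2, 0])   # map from A to B
M_bc = perm([2, 0, 1])   # map from B to C
M_ca = perm([0, 1, 2])   # map from C back to A

# Compose around the cycle A -> B -> C -> A.
cycle = M_ca @ M_bc @ M_ab

# Cycle-consistency error: zero iff the composite map is the identity.
error = np.linalg.norm(cycle - np.eye(3))
print(error)  # 0.0 for this consistent cycle
```

Here the three maps happen to be mutually consistent, so the error is zero; in practice, noisy pairwise matches yield nonzero cycle errors, and minimizing them over the whole network is one way to realize the "wisdom of the collection".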